This repository provides Terraform modules and ready-to-use end-to-end examples for Apigee.
- Terraform CLI on your PATH, version >= 1.1 (or as required by the version constraint in the respective module).
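For reference, a minimal sketch of how such a version constraint is typically declared in a module's `terraform` block (the exact constraint differs per module; check the module itself):

```hcl
terraform {
  # Hypothetical minimum version; the actual constraint is defined per module.
  required_version = ">= 1.1"
}
```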
Currently, the following modules are available and can be used either as part of the end-to-end examples or as part of your own scripting (see the usage sketch after this list):
- Apigee X Core: Configures a complete Apigee X organization with multiple instances, environment groups, and environments.
- Apigee X Bridge MIG: Configures a managed instance group of network bridge GCE instances (VMs) that can be used as a load balancer backend and forward traffic to the internal Apigee X endpoint.
- Apigee X mTLS MIG: Configures a managed instance group of Envoy proxies that can be used to terminate mutual TLS and forward traffic to the internal Apigee X endpoint.
- L7 external LB for MIG: Configures an external HTTPS Cloud Load Balancer that fronts managed instance groups.
- L4 external LB for MIG: Configures an external TCP proxy load balancer that fronts managed instance groups.
- Routing Appliance: Configures a routing appliance and custom routes to overcome transitive peering problems.
- Northbound PSC Backend: Configures a Private Service Connect (PSC) Network Endpoint Group (NEG) backend with an external HTTPS load balancer.
- Southbound PSC Backend: Configures a Private Service Connect (PSC) service attachment and an Apigee endpoint attachment.
- Development Backend: Configures an example HTTP backend and an internal load balancer.
- NIP.io Development Hostname: Configures an external IP address, a hostname derived from that IP via the nip.io mechanism, and a Google-managed SSL certificate.
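As a rough sketch of how one of these modules can be consumed from your own Terraform configuration (the source path and input name below are assumptions; check the respective module's documentation for its actual interface):

```hcl
# Hypothetical usage sketch only; source path and inputs are assumptions.
module "apigee_x_core" {
  # Relative path to the module within a checkout of this repository (assumed).
  source = "../../modules/apigee-x-core"

  # Illustrative input; see the module's variables for the real interface.
  project_id = var.project_id
}
```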
Set the project ID of the Google Cloud project where you want your Apigee organization to be deployed:
PROJECT_ID=my-project-id
Select one of the available sample deployments:
- X Basic for a basic Apigee X setup with the raw instance endpoints exposed as internal IP addresses.
- X with external L7 LB for an Apigee X setup that is exposed via a global external L7 load balancer.
- X with external L7 LB and northbound PSC for an Apigee X setup that uses a global external L7 load balancer and a Private Service Connect (PSC) Network Endpoint Group (NEG) to connect to an Apigee instance's service attachment.
- X with southbound PSC for an Apigee X setup that uses Private Service Connect (PSC) to connect to a backend service in another VPC.
- X with internal L4 LB and mTLS for a basic Apigee X setup plus exposure via a regional L4 load balancer and an Envoy proxy to terminate mTLS.
- X with external L4 LB and mTLS for a basic Apigee X setup plus exposure via a global external L4 load balancer and an Envoy proxy to terminate mTLS.
- X with network appliance for transitive peering for an Apigee X organization that is peered to a network that is transitively peered to another VPC containing the backend.
- X with DNS peering for a basic Apigee X setup with DNS peering with a private DNS zone containing records for Apigee and an example backend.
- X with controlled internet egress for a basic Apigee X setup with the runtime's internet egress routed via a firewall appliance in the customer's VPC.
- X with Shared VPC for an Apigee X setup in a Shared VPC that is exposed via a global external L7 load balancer.
- X with Multi Region for an Apigee X setup in a Shared VPC that is exposed in multiple GCP regions via a global L7 load balancer. Note that the sample uses an EVAL Apigee X organization and therefore only a single Apigee X instance. If you have a PROD Apigee X organization, you can easily extend the sample accordingly.
- X with IaC Automation Pipeline for an Apigee X setup with an IaC automation pipeline, deployed in a Shared VPC and exposed in multiple GCP regions via a global L7 load balancer. Note that the sample uses an EVAL Apigee X organization and therefore only a single Apigee X instance. If you have a PROD Apigee X organization, you can easily extend the sample accordingly.
- Hybrid on GKE (Preview) for an Apigee hybrid setup on Google Kubernetes Engine that uses the new installation tooling based on kustomize.

To deploy a sample, first create a copy of the example variables and edit it according to your requirements:
cd samples/... # Sample from above
cp ./x-demo.tfvars ./my-config.tfvars
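What goes into my-config.tfvars depends on the sample you picked; as a purely illustrative sketch (the variable names below are placeholders, the real ones are defined in the sample's variables.tf):

```hcl
# my-config.tfvars (illustrative placeholders; the real variable names and
# defaults are defined in the chosen sample's variables.tf)
ax_region           = "europe-west1"   # hypothetical variable name
apigee_environments = ["dev", "test"]  # hypothetical variable name
```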
Decide on a Terraform backend and create the necessary configuration. To use a Google Cloud Storage (GCS) backend, run:
gsutil mb "gs://$PROJECT_ID-tf"
cat <<EOF >terraform.tf
terraform {
  backend "gcs" {
    bucket = "$PROJECT_ID-tf"
    prefix = "terraform/state"
  }
}
EOF
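If you prefer not to use GCS for a quick experiment, a local backend also works; a minimal sketch (state is then kept on your machine only):

```hcl
terraform {
  backend "local" {
    # State file kept on the local machine; fine for experiments only.
    path = "terraform.tfstate"
  }
}
```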
Validate your config:
terraform init
terraform plan --var-file=./my-config.tfvars -var "project_id=$PROJECT_ID"
and provision everything (takes roughly 25 minutes):
terraform apply --var-file=./my-config.tfvars -var "project_id=$PROJECT_ID"
- Currently, there are no known issues specific to the modules in this repository.
- Feel free to create an issue if you come across anything.
- Please also see the list of open issues in the upstream Terraform provider that could be inherited by these modules.
All solutions within this repository are provided under the Apache 2.0 license. Please see the LICENSE file for more detailed terms and conditions.
This repository and its contents are not an official Google product.