This module provides a quick and easy way to launch the Lifecycle application on different cloud providers with support for multiple DNS providers. It is ideal for testing, demos, and learning—do not use in production!
- Multi-cloud support: AWS EKS, GCP GKE, and (coming soon) OpenStack
- Multi-DNS support: Cloudflare, AWS Route 53, GCP Cloud DNS
- Self-contained logic: No external OpenTofu/Terraform modules—everything is implemented in internal submodules
- Minimal infrastructure: Creates just the necessary resources for the Lifecycle app, optimizing cost
| Cloud Provider | Module Parameter | Status |
|---|---|---|
| Amazon EKS | `cluster_provider = "eks"` | Stable |
| Google GKE | `cluster_provider = "gke"` | Stable |
| OpenStack | `cluster_provider = "openstack"` | Coming Soon |
| DNS Provider | Parameter Key | Status |
|---|---|---|
| Cloudflare | `dns_provider = "cloudflare"` | Stable |
| AWS Route 53 | `dns_provider = "route53"` | Stable |
| GCP Cloud DNS | `dns_provider = "cloud-dns"` | Stable |
You can mix and match, e.g. AWS EKS with Cloudflare DNS, or GKE with Route 53.
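A minimal `secrets.auto.tfvars` along these lines pairs EKS with Cloudflare (variable names are taken from the Inputs table further below; the token value is a placeholder):

```hcl
# Illustrative secrets.auto.tfvars -- EKS cluster with Cloudflare DNS
cluster_provider     = "eks"
dns_provider         = "cloudflare"
app_domain           = "example.com"
cloudflare_api_token = "CHANGE_ME" # placeholder; keep real tokens out of version control
```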
- OpenTofu CLI installed and initialized. OpenTofu is an open-source infrastructure-as-code tool and is fully compatible with Terraform v0.13+ syntax.
- A cloud account for your chosen provider:
  - AWS: free tier available
  - GCP: free $300 credit trial
  - OpenStack: self-hosted (coming soon)
- A DNS account for your chosen DNS provider:
  - Cloudflare: free tier includes 1 zone
  - Route 53: pay-as-you-go
  - Cloud DNS: pay-as-you-go
```sh
git clone https://github.com/GoodRxOSS/lifecycle-opentofu.git
cd lifecycle-opentofu
```
- Create an IAM user with a programmatic key (for testing only!).
- Attach the `AdministratorAccess` policy (or fine-grained DNS, EKS, and VPC permissions).
- Configure an AWS CLI profile:

  ```sh
  aws configure --profile lifecycle-oss-eks
  ```

- Optional: Create a DNS zone and delegate NS records at your registrar.
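With the profile configured, the matching module inputs (see the Inputs table further below) might look like:

```hcl
# Illustrative AWS values for secrets.auto.tfvars
cluster_provider = "eks"
aws_profile      = "lifecycle-oss-eks" # the CLI profile created above
aws_region       = "us-west-2"
dns_provider     = "route53"           # or "cloudflare" / "cloud-dns"
```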
- Create a Google Cloud project and enable the Kubernetes Engine API.
- Install and authenticate the `gcloud` CLI.
- Get credentials:

  ```sh
  gcloud config set project lifecycle-oss-123456
  gcloud auth application-default login
  ```

- Optional: Create a DNS zone and delegate NS records at your registrar.
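The corresponding GCP inputs might then be (the project ID is the placeholder used throughout this guide):

```hcl
# Illustrative GCP values for secrets.auto.tfvars
cluster_provider = "gke"
gcp_project      = "lifecycle-oss-123456"
gcp_region       = "us-central1-b" # region or zone
# gcp_credentials_file = "/path/to/key.json" # optional; omit to use application-default credentials
```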
- Sign up for Cloudflare and add your domain.
- Create a DNS zone, delegate NS records at your registrar.
- Create an API token with `Zone.Zone` and `Zone.DNS:Edit` permissions.
- Save the token securely (e.g., in a secret manager).
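You can set the token in `secrets.auto.tfvars`, or keep it out of the file entirely and export it as `TF_VAR_cloudflare_api_token`, which OpenTofu picks up automatically:

```hcl
# Illustrative Cloudflare values for secrets.auto.tfvars
dns_provider         = "cloudflare"
cloudflare_api_token = "CHANGE_ME" # or omit and export TF_VAR_cloudflare_api_token instead
```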
```sh
cp example.auto.tfvars secrets.auto.tfvars
# Edit secrets.auto.tfvars with your values
tofu init
tofu plan
tofu apply
```
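Note: OpenTofu automatically loads any `*.auto.tfvars` file in the working directory, so `secrets.auto.tfvars` is picked up without a `-var-file` flag.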
Sometimes, running `tofu apply` once is not enough to fully provision all resources. This can happen due to eventual consistency in cloud APIs or delays in external systems. Common reasons why multiple `tofu apply` runs may be needed:
- DNS Propagation: some cloud resources depend on DNS names that may not resolve immediately after being created, so dependent resources may fail on the first run.
- Service Readiness: if a resource (e.g., a load balancer or DB instance) needs time to become fully ready, another resource depending on it might fail during the same apply.
- IAM Permissions Delay: recently updated roles or policies might not be fully propagated across the provider's infrastructure.
- Rate Limits / API Race Conditions: some providers impose soft throttling or return transient errors during rapid provisioning.
✅ Solution: just run `tofu apply` again. OpenTofu picks up from the current state and continues applying the remaining changes. This is normal behavior when working with eventually consistent cloud environments.
After running `tofu apply`, you should see a cheatsheet like this:
- Amazon EKS

  ```hcl
  help = <<EOT
  Quick help of usage [eks]:
  - Update `kubeconfig` file:
    $ aws eks update-kubeconfig --name lifecycle-oss --region us-west-2 --profile lifecycle-oss-eks
  - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
    $ kubectl -n lifecycle-app get pods
  - Check public endpoint, DNS, certificates etc.
    $ curl -v https://kuard.example.com
  EOT
  ```

- Google GKE

  ```hcl
  help = <<EOT
  Quick help of usage [gke]:
  - Update `kubeconfig` file:
    $ gcloud container clusters get-credentials lifecycle-oss --zone us-central1-b --project lifecycle-oss-123456
  - Check cluster permissions, e.g. List Pods in `lifecycle-app` namespace
    $ kubectl -n lifecycle-app get pods
  - Check public endpoint, DNS, certificates etc.
    $ curl -v https://kuard.example.com
  EOT
  ```
This is an autogenerated cheatsheet: copy, paste and run 🚀
```sh
tofu destroy
```

Sometimes, when you run `tofu destroy`, not all resources are removed in a single attempt, and in some cases that is expected. Network delays, expired credentials, or external system propagation (like DNS or IAM updates) can temporarily block proper cleanup. Just run `tofu destroy` again after a short wait.
Before resorting to manual intervention:
- Run `tofu destroy` multiple times to give the system time to resolve dependencies and update state.
- If a resource still cannot be destroyed automatically (due to external constraints or provider API limitations), only then consider manual deletion, and document the action carefully to avoid state drift.
Manually removing resources can lead to:
- Inconsistent state files
- Broken dependencies on future deployments
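If manual deletion is unavoidable, also remove the orphaned entries from state (e.g., with `tofu state rm <address>`) so the state file matches what actually exists before the next plan.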
Requirements:

| Name | Version |
|---|---|
| aws | ~> 5.0 |
| cloudflare | ~> 5.0 |
| google | ~> 6.0 |
| helm | ~> 2.0 |
| kubectl | ~> 1.0 |
| kubernetes | ~> 2.0 |
| random | ~> 3.0 |
| tls | ~> 4.0 |
Providers:

| Name | Version |
|---|---|
| helm | 2.17.0 |
| kubectl | 1.19.0 |
| kubernetes | 2.37.1 |
| random | 3.7.2 |
Modules:

| Name | Source | Version |
|---|---|---|
| cloud_dns | ./modules/gcp-cloud-dns | n/a |
| cloudflare | ./modules/cloudflare-dns | n/a |
| eks | ./modules/aws-eks | n/a |
| gke | ./modules/gcp-gke | n/a |
| route53 | ./modules/aws-route53 | n/a |
Resources:

| Name | Type |
|---|---|
| helm_release.app_buildkit | resource |
| helm_release.app_distribution | resource |
| helm_release.app_postgres | resource |
| helm_release.app_redis | resource |
| helm_release.cert_manager | resource |
| helm_release.cluster_autoscaler | resource |
| helm_release.ingress_nginx_controller | resource |
| kubectl_manifest.letsencrypt_clusterissuer | resource |
| kubectl_manifest.letsencrypt_dns_certificate | resource |
| kubectl_manifest.letsencrypt_dns_clusterissuer | resource |
| kubectl_manifest.letsencrypt_dns_credentials_secret | resource |
| kubectl_manifest.wildcard_certificate_secret | resource |
| kubernetes_config_map_v1.app_config | resource |
| kubernetes_deployment.this | resource |
| kubernetes_ingress_v1.this | resource |
| kubernetes_namespace_v1.app | resource |
| kubernetes_secret_v1.app_postgres | resource |
| kubernetes_secret_v1.app_redis | resource |
| kubernetes_service.this | resource |
| kubernetes_storage_class.aws_gp3 | resource |
| random_password.app_postgres | resource |
| random_password.app_redis | resource |
| kubernetes_service.ingress_nginx_controller | data source |
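The descriptions in the Inputs table below suggest that the optional `helm_release.app_*` resources above are gated by the matching `app_*_enabled` toggles, so a lighter deployment can be sketched like this (illustrative values):

```hcl
# Illustrative secrets.auto.tfvars fragment -- trim optional components
app_buildkit_enabled     = false # skip in-cluster image builds
app_distribution_enabled = false # skip the distribution module (API, frontend)
app_postgres_enabled     = true
app_redis_enabled        = true
```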
Inputs:

| Name | Description | Type | Default | Required |
|---|---|---|---|---|
| app_buildkit_enabled | Toggle to control whether BuildKit is deployed (e.g., for image builds). | `bool` | `true` | no |
| app_distribution_enabled | Toggle to enable or disable the distribution module (e.g., API, frontend). | `bool` | `true` | no |
| app_distribution_subdomain | Subdomain used to expose the distribution module. | `string` | `"distribution"` | no |
| app_domain | n/a | `string` | `"example.com"` | no |
| app_enabled | Global toggle to enable or disable the entire application deployment. | `bool` | `true` | no |
| app_namespace | n/a | `string` | `"application-env"` | no |
| app_postgres_database | Name of the PostgreSQL database to create and use. | `string` | `"lifecycle"` | no |
| app_postgres_enabled | Toggle to control whether PostgreSQL is deployed. | `bool` | `true` | no |
| app_postgres_port | Port used to connect to the PostgreSQL service. | `number` | `5432` | no |
| app_postgres_username | Username for accessing the PostgreSQL database. | `string` | `"lifecycle"` | no |
| app_redis_enabled | Toggle to control whether Redis is deployed. | `bool` | `true` | no |
| app_redis_port | Port used to connect to the Redis service. | `number` | `6379` | no |
| app_subdomain | Subdomain used to expose the Application module. | `string` | `"app"` | no |
| aws_profile | The AWS CLI profile name to use for authentication and authorization when interacting with AWS services. This profile should be configured in your AWS credentials file (usually `~/.aws/credentials`). The profile name must be a non-empty string; contain only alphanumeric characters, underscores (_), hyphens (-), and dots (.); and start and end with an alphanumeric character. Example valid profile names: `default`, `lifecycle-oss-eks`, `my_profile-1`. Make sure the profile exists and has the necessary permissions. | `string` | `"default"` | no |
| aws_region | The AWS region where the EKS cluster and related resources will be deployed. Example: "us-east-1", "eu-west-1", "us-west-2" | `string` | `"us-west-2"` | no |
| cloudflare_api_token | n/a | `string` | `null` | no |
| cluster_name | The name of the Kubernetes cluster. Must consist of alphanumeric characters and dashes, and be 1–100 characters long. | `string` | `"k8s-cluster"` | no |
| cluster_provider | n/a | `string` | `"eks"` | no |
| dns_provider | n/a | `string` | `"route53"` | no |
| gcp_credentials_file | n/a | `string` | `null` | no |
| gcp_project | The Google Cloud Project ID to use for creating and managing resources. If not provided (null), some modules might attempt to infer the project from your environment or credentials. Format: 6–30 characters; lowercase letters, digits, and hyphens only; must start with a lowercase letter; cannot end with a hyphen. | `string` | `null` | no |
| gcp_region | The Google Cloud region or zone where the GKE cluster is deployed. Example: "us-central1" or "us-central1-b" | `string` | `"us-central1-b"` | no |
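Putting it all together, a complete `secrets.auto.tfvars` for a GKE deployment with Cloud DNS might look like this (all names come from the table above; values are placeholders):

```hcl
# Illustrative secrets.auto.tfvars -- GKE with Cloud DNS, placeholder values
cluster_provider = "gke"
cluster_name     = "lifecycle-oss"
dns_provider     = "cloud-dns"

gcp_project = "lifecycle-oss-123456"
gcp_region  = "us-central1-b"

app_domain    = "example.com"
app_subdomain = "app"           # application served at app.example.com
app_namespace = "lifecycle-app" # namespace used in the cheatsheet examples
```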
Outputs:

| Name | Description |
|---|---|
| help | Quick help of usage |
This module is provided as-is for demonstration and testing purposes. Do not use it in production environments without proper security review and adaptation.
Contributions, issues, and feature requests are welcome! Please open an issue or submit a pull request on the GitHub repository.
This project is licensed under the Apache License 2.0. See LICENSE for details.