MaterializeInc/terraform-google-materialize

Materialize on Google Cloud Platform

Terraform module for deploying Materialize on Google Cloud Platform (GCP) with all required infrastructure components.

This module sets up:

  • GKE cluster for Materialize workloads
  • Cloud SQL PostgreSQL instance for metadata storage
  • Cloud Storage bucket for persistence
  • Required networking and security configurations
  • Service accounts with proper IAM permissions

Warning

This module is intended for demonstration/evaluation purposes as well as for serving as a template when building your own production deployment of Materialize.

This module should not be directly relied upon for production deployments: future releases of the module will contain breaking changes. Instead, to use as a starting point for your own production deployment, either:

  • Fork this repo and pin to a specific version, or
  • Use the code as a reference when developing your own deployment.
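As a sketch of the first option, a root module might pin this module to a release tag like so. The project ID, password variable, and CIDR ranges below are illustrative placeholders, and the `ref` should point at whichever release you have vetted:

```hcl
module "materialize" {
  # Pinning to a specific tag shields you from future breaking changes.
  source = "github.com/MaterializeInc/terraform-google-materialize?ref=v0.3.0"

  project_id = "my-gcp-project" # placeholder
  region     = "us-central1"

  # password is the only required field; the rest have defaults.
  database_config = {
    password = var.database_password
  }

  # All three CIDR ranges are required; values here are examples only.
  network_config = {
    subnet_cidr   = "10.0.0.0/20"
    pods_cidr     = "10.48.0.0/14"
    services_cidr = "10.52.0.0/20"
  }
}
```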

The module has been tested with:

  • GKE version 1.28
  • PostgreSQL 15
  • Materialize Operator v0.1.0

Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.0 |
| google | >= 6.0 |
| helm | ~> 2.0 |
| kubernetes | ~> 2.0 |

Providers

No providers.

Modules

| Name | Source | Version |
|------|--------|---------|
| certificates | ./modules/certificates | n/a |
| database | ./modules/database | n/a |
| gke | ./modules/gke | n/a |
| load_balancers | ./modules/load_balancers | n/a |
| networking | ./modules/networking | n/a |
| operator | github.com/MaterializeInc/terraform-helm-materialize | v0.1.9 |
| storage | ./modules/storage | n/a |

Resources

No resources.

Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|:--------:|
| cert_manager_chart_version | Version of the cert-manager helm chart to install. | `string` | `"v1.17.1"` | no |
| cert_manager_install_timeout | Timeout for installing the cert-manager helm chart, in seconds. | `number` | `300` | no |
| cert_manager_namespace | The name of the namespace in which cert-manager is or will be installed. | `string` | `"cert-manager"` | no |
| database_config | Cloud SQL configuration | <pre>object({<br>  tier     = optional(string, "db-custom-2-4096")<br>  version  = optional(string, "POSTGRES_15")<br>  password = string<br>  username = optional(string, "materialize")<br>  db_name  = optional(string, "materialize")<br>})</pre> | n/a | yes |
| gke_config | GKE cluster configuration. Make sure to use large enough machine types for your Materialize instances. | <pre>object({<br>  node_count   = number<br>  machine_type = string<br>  disk_size_gb = number<br>  min_nodes    = number<br>  max_nodes    = number<br>})</pre> | <pre>{<br>  "disk_size_gb": 50,<br>  "machine_type": "e2-standard-4",<br>  "max_nodes": 2,<br>  "min_nodes": 1,<br>  "node_count": 1<br>}</pre> | no |
| helm_chart | Chart name from repository or local path to chart. For local charts, set the path to the chart directory. | `string` | `"materialize-operator"` | no |
| helm_values | Values to pass to the Helm chart | `any` | `{}` | no |
| install_cert_manager | Whether to install cert-manager. | `bool` | `true` | no |
| install_materialize_operator | Whether to install the Materialize operator | `bool` | `true` | no |
| install_metrics_server | Whether to install the metrics-server for the Materialize Console. Defaults to false since GKE installs one by default in the kube-system namespace. Only set to true if the GKE cluster was deployed with monitoring explicitly turned off. Refer to the GKE docs for more information, including impact to GKE customer support efforts. | `bool` | `false` | no |
| labels | Labels to apply to all resources | `map(string)` | `{}` | no |
| materialize_instances | Configuration for Materialize instances | <pre>list(object({<br>  name                    = string<br>  namespace               = optional(string)<br>  database_name           = string<br>  create_database         = optional(bool, true)<br>  create_load_balancer    = optional(bool, true)<br>  internal_load_balancer  = optional(bool, true)<br>  environmentd_version    = optional(string)<br>  cpu_request             = optional(string, "1")<br>  memory_request          = optional(string, "1Gi")<br>  memory_limit            = optional(string, "1Gi")<br>  in_place_rollout        = optional(bool, false)<br>  request_rollout         = optional(string)<br>  force_rollout           = optional(string)<br>  balancer_memory_request = optional(string, "256Mi")<br>  balancer_memory_limit   = optional(string, "256Mi")<br>  balancer_cpu_request    = optional(string, "100m")<br>}))</pre> | `[]` | no |
| namespace | Kubernetes namespace for Materialize | `string` | `"materialize"` | no |
| network_config | Network configuration for the GKE cluster | <pre>object({<br>  subnet_cidr   = string<br>  pods_cidr     = string<br>  services_cidr = string<br>})</pre> | n/a | yes |
| operator_namespace | Namespace for the Materialize operator | `string` | `"materialize"` | no |
| operator_version | Version of the Materialize operator to install | `string` | `null` | no |
| orchestratord_version | Version of the Materialize orchestrator to install | `string` | `null` | no |
| prefix | Prefix to be used for resource names | `string` | `"materialize"` | no |
| project_id | The ID of the project where resources will be created | `string` | n/a | yes |
| region | The region where resources will be created | `string` | `"us-central1"` | no |
| use_local_chart | Whether to use a local chart instead of one from a repository | `bool` | `false` | no |
| use_self_signed_cluster_issuer | Whether to install and use a self-signed ClusterIssuer for TLS. To work around limitations in Terraform, this will be treated as false if no materialize instances are defined. | `bool` | `true` | no |
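As an illustration of the materialize_instances input, a single instance might be declared as follows; the instance name, database name, and resource sizes are hypothetical:

```hcl
materialize_instances = [
  {
    # Hypothetical instance; name and database_name are the required fields.
    name          = "analytics"
    database_name = "analytics"

    # Resource sizing overrides (the defaults are 1 CPU / 1Gi).
    cpu_request    = "2"
    memory_request = "4Gi"
    memory_limit   = "4Gi"
  }
]
```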

Outputs

| Name | Description |
|------|-------------|
| connection_strings | Formatted connection strings for Materialize |
| database | Cloud SQL instance details |
| gke_cluster | GKE cluster details |
| load_balancer_details | Details of the Materialize instance load balancers. |
| network | Network details |
| operator | Materialize operator details |
| service_accounts | Service account details |
| storage | GCS bucket details |
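If the module is instantiated under a name such as `module.materialize` (an assumption of this sketch), its outputs can be re-exported from the root module:

```hcl
# Re-export the module's connection strings; marked sensitive since
# they embed credentials.
output "materialize_connection_strings" {
  value     = module.materialize.connection_strings
  sensitive = true
}
```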

Connecting to Materialize instances

Access to the database is through the balancerd pods on:

  • Port 6875 for SQL connections.
  • Port 6876 for HTTP(S) connections.

Access to the web console is through the console pods on port 8080.

TLS support

TLS support is provided by using cert-manager and a self-signed ClusterIssuer.

More advanced TLS support, such as user-provided CAs or per-Materialize Issuers, is out of scope for this Terraform module. Please refer to the cert-manager documentation for detailed guidance on more advanced usage.

Upgrade Notes

v0.3.0

We now install cert-manager and configure a self-signed ClusterIssuer by default.

Due to a limitation in Terraform, it cannot plan Kubernetes resources that use CRDs which do not yet exist. For new users we work around this by generating the certificate resources only when the Materialize instances that use them are created, and those instances likewise cannot be created on the first run.

For existing users upgrading Materialize instances not previously configured for TLS:

  1. Leave `install_cert_manager` at its default of `true`.
  2. Set `use_self_signed_cluster_issuer` to `false`.
  3. Run `terraform apply`. This will install cert-manager and its CRDs.
  4. Set `use_self_signed_cluster_issuer` back to `true` (the default).
  5. Update the `request_rollout` field of the Materialize instance.
  6. Run `terraform apply`. This will generate the certificates and configure your Materialize instance to use them.
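In terraform.tfvars form, the two phases above might look like the following; the instance definition and rollout value are placeholders, not values from this module's documentation:

```hcl
# Phase 1 (steps 1-3): install cert-manager and its CRDs first.
install_cert_manager           = true
use_self_signed_cluster_issuer = false

# Phase 2 (steps 4-6): after the first apply, re-enable the issuer and
# update request_rollout to trigger the rollout, then apply again.
# use_self_signed_cluster_issuer = true
# materialize_instances = [
#   {
#     name            = "analytics" # placeholder
#     database_name   = "analytics"
#     request_rollout = "11111111-1111-1111-1111-111111111111" # placeholder; a new value triggers a rollout
#   }
# ]
```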