Provisioning GCP Resources with Terraform
Author: Venkata Sudhakar
Google Cloud Platform (GCP) is one of the most popular targets for enterprise cloud migrations. Terraform with the hashicorp/google provider gives you a consistent, versioned, repeatable way to provision GCP resources across dev, staging, and production environments. Rather than clicking through the GCP console (which is hard to audit and impossible to replicate exactly), all infrastructure is defined as code, reviewed in pull requests, and applied through a CI/CD pipeline. This section covers three of the most commonly needed GCP resources in data migration and AI workloads: GCS buckets, Cloud SQL, and GKE clusters.

The Google provider authenticates using Application Default Credentials (ADC). In development, run gcloud auth application-default login. In CI/CD pipelines or on GKE, use a service account key JSON file or Workload Identity. For automated pipelines, the most common approach is to set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of a service account key file. Always grant the service account only the minimum IAM roles it needs - typically roles/storage.admin for GCS, roles/cloudsql.admin for Cloud SQL, and roles/container.admin for GKE.

The example below shows a complete Terraform configuration provisioning three foundational GCP resources: a GCS bucket for data lake storage, a Cloud SQL PostgreSQL instance, and a GKE Autopilot cluster.
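A minimal sketch of such a configuration is shown here. The project ID (my-gcp-project), region (us-central1), environment name (dev), CIDR range, and database tier are illustrative assumptions; substitute your own values. The resource names match the plan output that follows.

```hcl
terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = var.project_id
  region  = var.region
}

variable "project_id" { default = "my-gcp-project" } # assumed project ID
variable "region"     { default = "us-central1" }
variable "environment" { default = "dev" }

locals {
  prefix = "${var.project_id}-${var.environment}"
}

# Custom VPC and subnet for the GKE cluster.
resource "google_compute_network" "vpc" {
  name                    = "${local.prefix}-vpc"
  auto_create_subnetworks = false
}

resource "google_compute_subnetwork" "subnet" {
  name          = "${local.prefix}-subnet"
  network       = google_compute_network.vpc.id
  region        = var.region
  ip_cidr_range = "10.0.0.0/16" # assumed range
}

# GCS bucket for data lake storage; versioning guards against accidental overwrites.
resource "google_storage_bucket" "data_lake" {
  name                        = "${local.prefix}-data-lake"
  location                    = var.region
  uniform_bucket_level_access = true
  versioning { enabled = true }
}

# Cloud SQL PostgreSQL instance; creation typically takes several minutes.
resource "google_sql_database_instance" "postgres" {
  name             = "${local.prefix}-postgres"
  database_version = "POSTGRES_15"
  region           = var.region
  settings {
    tier = "db-f1-micro" # development tier; size up for production
  }
  deletion_protection = false # set to true in production
}

resource "google_sql_database" "appdb" {
  name     = "appdb"
  instance = google_sql_database_instance.postgres.name
}

# GKE Autopilot cluster: Google manages the nodes; you manage only workloads.
resource "google_container_cluster" "gke" {
  name             = "${local.prefix}-gke"
  location         = var.region
  enable_autopilot = true
  network          = google_compute_network.vpc.id
  subnetwork       = google_compute_subnetwork.subnet.id
}
```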
Applying this configuration produces output similar to the following:
Terraform will perform the following actions:
+ google_compute_network.vpc
+ google_compute_subnetwork.subnet
+ google_storage_bucket.data_lake
+ google_sql_database_instance.postgres
+ google_sql_database.appdb
+ google_container_cluster.gke
Plan: 6 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
  Enter a value: yes

google_compute_network.vpc: Creation complete after 3s
google_compute_subnetwork.subnet: Creation complete after 8s
google_storage_bucket.data_lake: Creation complete after 2s
google_sql_database_instance.postgres: Creation complete after 5m30s (Cloud SQL instances take several minutes)
google_container_cluster.gke: Creation complete after 3m15s (GKE Autopilot provisioning)
google_sql_database.appdb: Creation complete after 3s

Apply complete! Resources: 6 added, 0 changed, 0 destroyed.
Outputs:
bucket_name = "my-gcp-project-dev-data-lake"
sql_connection_name = "my-gcp-project:us-central1:my-gcp-project-dev-postgres"
gke_kubeconfig_command = "gcloud container clusters get-credentials my-gcp-project-dev-gke --region us-central1 --project my-gcp-project"
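These values could be declared with output blocks like the following sketch, which assumes the resource and variable names used above:

```hcl
output "bucket_name" {
  value = google_storage_bucket.data_lake.name
}

output "sql_connection_name" {
  # project:region:instance, as used by the Cloud SQL Auth Proxy
  value = google_sql_database_instance.postgres.connection_name
}

output "gke_kubeconfig_command" {
  value = "gcloud container clusters get-credentials ${google_container_cluster.gke.name} --region ${var.region} --project ${var.project_id}"
}
```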
Production hardening tips for GCP Terraform: set deletion_protection = true on Cloud SQL instances and GKE clusters in production; this prevents terraform destroy from accidentally deleting live databases. Enable VPC-native networking for GKE so Pods get IP addresses from the subnet's secondary ranges, which simplifies network policies and firewall rules. Store the Cloud SQL password in google_secret_manager_secret rather than in terraform.tfvars, and read it with a data.google_secret_manager_secret_version data source. Finally, keep remote state in a GCS bucket with versioning enabled; the gcs backend locks state during applies, so concurrent runs cannot corrupt terraform.tfstate.
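Two of these tips can be sketched in HCL. The state bucket name (my-gcp-project-tfstate) and secret ID (cloudsql-db-password) are assumptions; the state bucket must be created out of band before terraform init, and the secret must already hold a version.

```hcl
# Remote state in a versioned GCS bucket; the gcs backend
# locks state automatically during plan and apply.
terraform {
  backend "gcs" {
    bucket = "my-gcp-project-tfstate" # assumed, pre-created with versioning on
    prefix = "env/dev"
  }
}

# Read the database password from Secret Manager instead of terraform.tfvars.
data "google_secret_manager_secret_version" "db_password" {
  secret = "cloudsql-db-password" # assumed secret ID
}

# Database user whose password never appears in version control.
resource "google_sql_user" "app" {
  name     = "app"
  instance = google_sql_database_instance.postgres.name
  password = data.google_secret_manager_secret_version.db_password.secret_data
}
```

Because secret_data is marked sensitive, Terraform redacts it from plan output, but it still lands in the state file; restricting access to the state bucket is part of the same hardening story.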