GCP Introduction
GCP Pricing
GCP Threats
GCP Misconfigurations
- Getting Started with GCP Audit
- CloudSql Audit
- Cloud Tasks Monitoring
- Dataflow Monitoring
- Function Monitoring
- Monitoring Compliance
- PubSubLite Monitoring
- Spanner Monitoring
- NoSQL Monitoring
- Compute Audit
- IAM Audit
- BigQuery Monitoring
- CDN Monitoring
- DNS Monitoring
- KMS Monitoring
- Kubernetes Audit
- Load Balancer Monitoring
- Log Monitoring
- Storage Audit
- Pub/Sub Monitoring
- VPC Audit
- IAM Deep Dive
Worker Pool Teardown Policy Should Be Set
More Info:
Ensure that the worker pool teardown policy is set.

Risk Level: Low

Address: Operational Maturity, Reliability

Compliance Standards: CBP
Triage and Remediation
Remediation
To remediate the “Worker Pool Teardown Policy Should Be Set” misconfiguration in GCP using the GCP console, follow these steps:

- Open the GCP Console and navigate to the Cloud Build page.
- Click the “Worker pools” tab in the left-hand menu.
- Select the worker pool for which you want to set the teardown policy.
- Click the “Edit” button at the top of the page.
- Scroll down to the “Teardown policy” section.
- Select the “Delete instances when the pool is idle” option.
- Click the “Save” button at the bottom of the page.
- Verify that the teardown policy has been set correctly by checking the “Teardown policy” section for the worker pool.

By following these steps, you will have remediated the “Worker Pool Teardown Policy Should Be Set” misconfiguration in GCP using the GCP console.
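To find which worker pools need attention before opening the console, you can list them from a terminal and parse the output. A minimal sketch, assuming your gcloud version provides `gcloud builds worker-pools list --region=[REGION] --format=json`; the `pool_names` helper and the sample JSON below are illustrative:

```python
import json

def pool_names(list_output: str) -> list[str]:
    """Extract worker pool resource names from the JSON emitted by
    `gcloud builds worker-pools list --format=json` (illustrative helper)."""
    return [pool["name"] for pool in json.loads(list_output)]

# Illustrative sample of the JSON shape such a listing emits:
sample = '[{"name": "projects/my-project/locations/us-central1/workerPools/pool-a"}]'
print(pool_names(sample))  # ['projects/my-project/locations/us-central1/workerPools/pool-a']
```

Each returned name identifies one pool to check in the console steps above.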
To remediate the “Worker Pool Teardown Policy Should Be Set” misconfiguration for GCP using the GCP CLI, follow these steps:

- Open the Google Cloud SDK Shell or any other terminal of your choice.
- Run the following command to set the worker pool teardown policy to “delete”:

```
gcloud container node-pools update [POOL_NAME] --cluster=[CLUSTER_NAME] --workload-metadata=GKE_METADATA --teardown-policy=delete
```

Note: Replace [POOL_NAME] with the name of the node pool you want to update and [CLUSTER_NAME] with the name of the cluster the node pool belongs to.

- Once the command executes successfully, the worker pool teardown policy is set to “delete”.
- Verify the change by running the following command:

```
gcloud container node-pools describe [POOL_NAME] --cluster=[CLUSTER_NAME] --format="json" | jq '.management.autoRepair'
```

Note: Replace [POOL_NAME] and [CLUSTER_NAME] with the actual names.

- If the command outputs “true”, the node pool’s automatic repair is enabled and the worker pool teardown policy has been successfully set to “delete”.

By following these steps, you can remediate the “Worker Pool Teardown Policy Should Be Set” misconfiguration for GCP using the GCP CLI.
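The CLI verification can also be scripted: capture the JSON from `gcloud container node-pools describe ... --format="json"` and inspect the management block programmatically. A minimal sketch mirroring the `.management.autoRepair` check; the `teardown_policy_set` helper and the sample JSON are illustrative:

```python
import json

def teardown_policy_set(describe_output: str) -> bool:
    """Return True when the node pool's management block shows
    auto-repair enabled, mirroring the jq check in the CLI steps."""
    pool = json.loads(describe_output)
    return bool(pool.get("management", {}).get("autoRepair"))

# Illustrative sample of the describe command's JSON output:
sample = '{"name": "default-pool", "management": {"autoRepair": true, "autoUpgrade": true}}'
print(teardown_policy_set(sample))                    # True
print(teardown_policy_set('{"name": "default-pool"}'))  # False: no management block
```

This makes the check easy to run across many pools or inside a CI job.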
To remediate the “Worker Pool Teardown Policy Should Be Set” misconfiguration in GCP using Python, follow these steps:

- Install the required libraries (including google-cloud-container, which the remediation step below uses):

```
pip install google-cloud-logging google-cloud-container google-auth google-auth-oauthlib google-auth-httplib2
```
- Set up authentication to access the GCP project:

```python
from google.oauth2 import service_account

credentials = service_account.Credentials.from_service_account_file('path/to/service_account.json')
```
- Create a Logging client to access the logs:

```python
from google.cloud.logging_v2.services.logging_service_v2 import LoggingServiceV2Client

client = LoggingServiceV2Client(credentials=credentials)
```
- Define the filter to search for the relevant log entries:

```python
filter_str = (
    'resource.type="k8s_container" '
    'AND log_name="projects/<project_id>/logs/stderr" '
    'AND severity="ERROR" '
    'AND textPayload:"WorkerPoolTeardownPolicy" '
    'AND textPayload:"not set"'
)
```

Replace <project_id> with your GCP project ID.
- Retrieve the log entries using the filter:

```python
response = client.list_log_entries(
    request={
        "resource_names": ["projects/<project_id>"],
        "filter": filter_str,
    }
)
```
- For each log entry, print the relevant metadata:

```python
for entry in response:
    print(f"Log Name: {entry.log_name}")
    print(f"Resource Type: {entry.resource.type}")
    print(f"Resource Labels: {entry.resource.labels}")
    print(f"Severity: {entry.severity}")
    print(f"Timestamp: {entry.timestamp}")
    print(f"Message: {entry.json_payload['message']}")
```
- For each affected cluster, remediate the misconfiguration by setting the worker pool teardown policy (enabling node auto-repair and auto-upgrade on each node pool):

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient(credentials=credentials)

project_id = "<project_id>"
zone = "<zone>"
cluster_id = "<cluster_id>"

cluster_name = f"projects/{project_id}/locations/{zone}/clusters/{cluster_id}"
cluster = client.get_cluster(request={"name": cluster_name})

# Enable auto-repair and auto-upgrade on each node pool so that idle or
# unhealthy workers are torn down and replaced automatically.
for pool in cluster.node_pools:
    operation = client.set_node_pool_management(
        request={
            "name": f"{cluster_name}/nodePools/{pool.name}",
            "management": {"auto_repair": True, "auto_upgrade": True},
        }
    )
```
Replace <project_id>, <zone>, and <cluster_id> with your specific details.

- Verify that the misconfiguration has been remediated by running the same log query again and confirming that no matching entries are returned.
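The final verification amounts to rerunning the same filter and confirming the query yields no matching error entries. A minimal sketch; the `misconfiguration_resolved` helper is illustrative, and with live data you would pass it the iterable returned by the Logging client:

```python
def misconfiguration_resolved(entries) -> bool:
    """Return True when the log query yields no matching entries,
    i.e. no worker pool is still reporting a missing teardown policy."""
    return not any(True for _ in entries)

# With live data: misconfiguration_resolved(client.list_log_entries(...))
print(misconfiguration_resolved([]))         # True: no errors remain
print(misconfiguration_resolved(["error"]))  # False: still misconfigured
```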