Triage and Remediation
Remediation
Using Console
To remediate the misconfiguration “For Large Clusters L4 ILB Subsetting Should Be Used” for GCP using the GCP console, follow the steps below:
- Log in to your GCP console and select the project where the misconfiguration exists.
- Go to the “Kubernetes Engine” section from the main menu and open the “Clusters” page.
- Click on the name of the cluster that you want to remediate to open its details page.
- Scroll down to the “Networking” section of the cluster details.
- Next to “Subsetting for L4 internal load balancers”, click the edit (pencil) icon.
- Select the “Enable subsetting for L4 internal load balancers” checkbox.
- Click “Save Changes” and wait for the cluster update to complete.
- Note that subsetting is a cluster-level setting: once enabled, it cannot be disabled, and it requires a supported GKE version (1.18 or later).
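Once subsetting is enabled on the cluster, internal LoadBalancer Services benefit from it automatically; no per-Service change is required. As an illustrative sketch, a Service that provisions an L4 internal load balancer on GKE looks like the following (the Service name, selector, and ports are placeholders):

```yaml
# Hypothetical example Service: requests an internal passthrough L4 load
# balancer on GKE. With subsetting enabled on the cluster, GKE spreads
# this Service's backends across a subset of nodes rather than all nodes.
apiVersion: v1
kind: Service
metadata:
  name: example-internal-service   # placeholder name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: example-app               # placeholder selector
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
```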
Using CLI
To remediate the misconfiguration “For Large Clusters L4 ILB Subsetting Should Be Used” for GCP using the GCP CLI, follow the steps below:
- Open Cloud Shell in your GCP console.
- Make sure you have the latest version of the gcloud CLI by running:
gcloud components update
- Set the project where the misconfiguration exists:
gcloud config set project [PROJECT_ID]
- List the GKE clusters in the project and identify the cluster that needs remediation:
gcloud container clusters list
- Enable L4 ILB subsetting on that cluster (subsetting is configured on the cluster, not on individual backend services):
gcloud container clusters update [CLUSTER_NAME] --region [REGION] --enable-l4-ilb-subsetting
- Wait for the cluster update operation to complete. Note that once enabled, subsetting cannot be disabled on the cluster.
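After the update completes, you can confirm the setting took effect by inspecting the cluster's network configuration. A minimal check (assuming [CLUSTER_NAME] and [REGION] are replaced with your values):

```
# Print the cluster's L4 ILB subsetting flag; "True" means the
# misconfiguration is remediated.
gcloud container clusters describe [CLUSTER_NAME] \
  --region [REGION] \
  --format="value(networkConfig.enableL4ilbSubsetting)"
```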
Using Python
To remediate the misconfiguration “For Large Clusters L4 ILB Subsetting Should Be Used” in GCP using Python, follow the steps below:
- Install and import the Google Cloud client library for GKE (google-cloud-container).
- Define the project ID, the location (region or zone), and the name of the cluster that needs remediation.
- Create a ClusterManagerClient.
- Retrieve the cluster and check whether L4 ILB subsetting is already enabled in its network configuration.
- If it is not enabled, build a cluster update that sets the desired L4 ILB subsetting configuration to enabled.
- Send the update request and wait for the operation to complete.
- Print a success message.
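The steps above can be sketched as follows. This is a minimal sketch, not a definitive implementation: it assumes the google-cloud-container client library is installed and that application default credentials are configured in the environment; the project, location, and cluster names are placeholders you must replace.

```python
# Minimal sketch: enable L4 ILB subsetting on a GKE cluster using the
# google-cloud-container client library. All identifiers below are
# placeholders; credentials come from the environment (ADC).
from google.cloud import container_v1

project_id = "my-project"        # placeholder project ID
location = "us-central1"         # placeholder region (or zone)
cluster_name = "my-cluster"      # placeholder cluster name

client = container_v1.ClusterManagerClient()
name = f"projects/{project_id}/locations/{location}/clusters/{cluster_name}"

# Retrieve the cluster and inspect its current subsetting setting.
cluster = client.get_cluster(name=name)
if cluster.network_config.enable_l4ilb_subsetting:
    print(f"L4 ILB subsetting is already enabled on {cluster_name}.")
else:
    # Build an update that enables subsetting. Note this is a one-way
    # change: subsetting cannot be disabled once enabled.
    update = container_v1.ClusterUpdate(
        desired_l4ilb_subsetting_config=container_v1.ILBSubsettingConfig(
            enabled=True
        )
    )
    operation = client.update_cluster(name=name, update=update)
    print(f"Update operation started: {operation.name}")
    print(f"L4 ILB subsetting enabled on {cluster_name}.")
```

The update is asynchronous; in a production script you would poll the returned operation until it reaches a done status before reporting success.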