GKE private cluster and Cloud NAT issue

Hi Team,

I was successful in creating a private cluster, but even after adding Cloud NAT to the private subnet associated with the cluster, the pods still cannot access the internet.

Also, after I create the cluster I am unable to access it via Cloud Shell. Please let me know if there are any modifications I need to make.

gcloud compute networks subnets create subnet-us-central \
    --network custom-network1 \
    --region us-central1 \
    --range 192.168.1.0/24



gcloud container clusters create "nat-test-cluster" \
    --zone "us-central1-c" \
    --cluster-version "latest" \
    --machine-type "e2-medium" \
    --disk-type "pd-standard" \
    --disk-size "100" \
    --scopes "https://www.googleapis.com/auth/compute","https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
    --num-nodes "3" \
    --enable-private-nodes \
    --master-ipv4-cidr "172.16.0.0/28" \
    --enable-ip-alias \
    --network "projects/project-id/global/networks/custom-network1" \
    --subnetwork "projects/project-id/regions/us-central1/subnetworks/subnet-us-central" \
    --max-nodes-per-pool "110" \
    --enable-master-authorized-networks \
    --addons HorizontalPodAutoscaling,HttpLoadBalancing \
    --enable-autoupgrade \
    --enable-autorepair \
    --no-enable-basic-auth \
    --workload-pool=tooljet-us.svc.id.goog
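
From Cloud Shell I then fetch credentials as follows, but kubectl still cannot reach the cluster:

gcloud container clusters get-credentials nat-test-cluster \
    --zone us-central1-c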

gcloud compute firewall-rules create allow-ssh \
    --network custom-network1 \
    --source-ranges 35.235.240.0/20 \
    --allow tcp:22    
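
For context, 35.235.240.0/20 is the Identity-Aware Proxy forwarding range, so the idea here is to SSH into the private nodes through an IAP tunnel, something like this (NODE_NAME is a placeholder for one of the cluster nodes):

gcloud compute ssh NODE_NAME \
    --zone us-central1-c \
    --tunnel-through-iap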


gcloud compute routers create nat-router \
    --network custom-network1 \
    --region us-central1

gcloud compute routers nats create nat-config \
    --router-region us-central1 \
    --router nat-router \
    --nat-all-subnet-ip-ranges \
    --auto-allocate-nat-external-ips
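
For reference, the egress check I was running from inside the cluster looks roughly like this (the busybox image and target URL are arbitrary choices); it cannot fetch anything:

kubectl run nat-test --image=busybox:1.36 --rm -it --restart=Never -- \
    wget -qO- -T 10 http://example.com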

1 REPLY

Hey @adishm 

To access the GKE control plane on private clusters, you need to add a CIDR range or IP address to the master authorized networks. You can add your Cloud Shell IP range to this, but the better practice is to create a separate virtual machine to use as a jump host and allow traffic from there.
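
For example, to allow the Cloud Shell session you are currently in (a rough sketch; the ifconfig.me lookup is just one way to find your external IP, and note that Cloud Shell's IP changes between sessions, so you would have to re-run this each time):

gcloud container clusters update nat-test-cluster \
    --zone us-central1-c \
    --enable-master-authorized-networks \
    --master-authorized-networks "$(curl -s ifconfig.me)/32"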


For reference, you can look here:
https://cloud.google.com/kubernetes-engine/docs/how-to/authorized-networks#create_cluster
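
A minimal jump-host sketch in the same network (the name and machine type are placeholders; --no-address keeps it internal, and your allow-ssh rule above already permits reaching it through the IAP tunnel):

gcloud compute instances create jump-host \
    --zone us-central1-c \
    --machine-type e2-small \
    --network custom-network1 \
    --subnet subnet-us-central \
    --no-address

From that VM you can then run gcloud container clusters get-credentials and kubectl against the cluster; depending on your setup, you may also need to add the VM's internal range to the authorized networks list.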
