Timeout occurs when creating a Datastream connection profile whose data source is AlloyDB

Hi everyone.

I'm trying to create a Datastream connection profile in Terraform whose data source is AlloyDB,
but it fails with the following error message.

╷
│ Error: Error waiting to create ConnectionProfile: Error waiting for Creating ConnectionProfile: {"@type":"type.googleapis.com/google.rpc.ErrorInfo","domain":"datastream.googleapis.com","metadata":{"message":"We timed out trying to connect to the data source. Make sure that the hostname and port configuration is correct, and that the data source is available.","originalMessage":"timeout expired\n","time":"2023-07-03T17:24:57.012014Z","uuid":"e5cee153-8b4c-40ea-a222-eee20628dc28"},"reason":"CONNECTION_TIMEOUT"}
│ {"code":"VALIDATE_CONNECTIVITY","description":"Validates that Datastream can connect to the source database.","message":[{"code":"CONNECTION_TIMEOUT","level":"ERROR","message":"We timed out trying to connect to the data source. Make sure that the hostname and port configuration is correct, and that the data source is available.","metadata":{"original_error":"timeout expired\n"}}],"state":"FAILED"}
│ 
│ 
│ 
│   with google_datastream_connection_profile.alloydb,
│   on datastream.tf line 4, in resource "google_datastream_connection_profile" "alloydb":
│    4: resource "google_datastream_connection_profile" "alloydb" {
│ 
╵

 

I followed the instructions in this document, and this is the code I wrote:

module "gce-container" {
  source  = "terraform-google-modules/container-vm/google"
  version = "~> 2.0"

  container = {
    image = "gcr.io/dms-images/tcp-proxy"
    env = [
      {
        name  = "SOURCE_CONFIG"
        value = "${var.alloydb.hostname}:5432"
      }
    ],
  }
}

resource "google_compute_instance" "ds-tcp-proxy" {
  project      = var.project_id
  name         = "ds-tcp-proxy"
  machine_type = "e2-micro"
  zone         = var.default_zone

  tags = ["ds-tcp-proxy"]

  boot_disk {
    initialize_params {
      image = module.gce-container.source_image
    }
  }

  network_interface {
    network    = google_compute_network.vpc_network.id
    subnetwork = google_compute_subnetwork.tcp_proxy_datastream.id
  }

  can_ip_forward = true

  metadata = {
    gce-container-declaration = module.gce-container.metadata_value
    google-logging-enabled    = "true"
    google-monitoring-enabled = "true"
  }

  labels = {
    container-vm = module.gce-container.vm_container_label
  }

}

resource "google_compute_firewall" "ds_proxy" {
  name    = "ds-proxy"
  project = var.project_id
  network = google_compute_network.vpc_network.id

  allow {
    protocol = "tcp"
    ports    = ["5432"]
  }

  source_ranges = ["10.1.0.0/29"]

  direction = "INGRESS"
  priority  = 1000

  target_tags = ["ds-tcp-proxy"]
}

 

resource "google_datastream_connection_profile" "alloydb" {
  display_name          = "AlloyDB Connection profile"
  location              = var.default_region
  connection_profile_id = "alloydb-connection-profile"
  project               = var.project_id

  postgresql_profile {
    hostname = google_compute_instance.ds-tcp-proxy.network_interface[0].network_ip
    port     = 5432
    username = sensitive(var.alloydb.username)
    password = sensitive(var.alloydb.password)
    database = "postgres"
  }

  private_connectivity {
    private_connection = google_datastream_private_connection.main.id
  }
}

resource "google_datastream_private_connection" "main" {
  display_name          = "Datastream Private Connection"
  location              = var.default_region
  private_connection_id = "ds-private-connection"
  project               = var.project_id

  vpc_peering_config {
    vpc    = google_compute_network.vpc_network.id
    subnet = "10.1.0.0/29"
  }
}

When I use the username and password from `variables.tf`, I can log in with psql.
Also, the region and location are the same in all resources.

Would you kindly tell me what the problem is?
Thanks.


Based on the error message and the code you provided, the failure is a timeout while trying to establish a connection to the data source. This can be caused by a variety of factors, including incorrect configuration, network issues, or firewall settings. Here are a few things to check:

  1. TCP Proxy Configuration: Ensure that the TCP proxy is correctly configured. This is a crucial step in enabling Datastream to connect to the AlloyDB instance. In your Terraform script, you're using a Google Cloud Compute instance with a container running the TCP proxy image. Make sure that the SOURCE_CONFIG environment variable for the container is correctly set to the AlloyDB's hostname and port (${var.alloydb.hostname}:5432). As per the Google Cloud documentation, the SOURCE_CONFIG should be set to the IP address and port number of the AlloyDB for PostgreSQL instance​.

  2. Firewall Rules: Ensure that the firewall rules are set up correctly to allow the required traffic. The firewall rule in your script is configured to allow TCP traffic on port 5432 from the IP range 10.1.0.0/29. Make sure that this is the correct range from which Datastream will initiate the connection​.

  3. Database Configuration: Ensure that AlloyDB is properly configured for replication. This includes setting the wal_level configuration parameter to logical, creating a publication for all tables in your database, granting replication privileges to your database user, and creating a replication slot. These steps are crucial for Datastream to be able to connect and replicate the data from AlloyDB​.

  4. Datastream User: Make sure that the user you're using to connect to AlloyDB from Datastream has the necessary permissions. The user should have replication and login privileges, the ability to create a database, and select access to all tables in the schema​.
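For point 3, if the AlloyDB instance is also managed in Terraform, enabling logical decoding might look roughly like the sketch below. The cluster and instance names are illustrative (they are not from your configuration), and the `alloydb.logical_decoding` flag name should be double-checked against the current Datastream documentation for AlloyDB:

```hcl
# Illustrative sketch only: resource names are assumptions, not taken
# from the original post. Sets the database flag that enables logical
# decoding, which Datastream needs for replication.
resource "google_alloydb_instance" "primary" {
  cluster       = google_alloydb_cluster.main.name
  instance_id   = "primary-instance"
  instance_type = "PRIMARY"

  database_flags = {
    "alloydb.logical_decoding" = "on"
  }
}
```

The publication, replication slot, and user grants from points 3 and 4 still have to be created with SQL inside the database itself.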

Hi @ms4446,
Thanks for the quick reply.

I checked all four points, but none of them seems to be the problem.
Would it be better to create a TCP ingress rule that allows connections from resources in the same subnet?

Thanks.

 

Yes, you could try creating a firewall rule that allows TCP traffic on the necessary port(s) from all IP addresses in the subnet where your AlloyDB instance and TCP proxy are located. This should permit connections between your TCP proxy and the AlloyDB instance.

Your rule might look something like this in Terraform:

resource "google_compute_firewall" "ds_proxy_subnet" {
  name    = "ds-proxy-subnet"
  project = var.project_id
  network = google_compute_network.vpc_network.id

  allow {
    protocol = "tcp"
    ports    = ["5432"]
  }

  source_ranges = [google_compute_subnetwork.tcp_proxy_datastream.ip_cidr_range]

  direction = "INGRESS"
  priority  = 1000

  target_tags = ["ds-tcp-proxy"]
}

This rule allows ingress TCP traffic on port 5432 from all IP addresses in the subnet (google_compute_subnetwork.tcp_proxy_datastream.ip_cidr_range).

Please note that opening up access to your database from all IPs in a subnet may have security implications. You should ensure that your database is secured and only accessible to authorized services and users.
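As a small variation (a sketch, not something from your original configuration): since the `10.1.0.0/29` range is already declared on the Datastream private connection, a rule for Datastream's own traffic could reference that attribute instead of repeating the literal. The attribute path assumes the provider exposes `vpc_peering_config` as a single-element nested block:

```hcl
# Sketch: reuse the peering subnet declared on the private connection
# so the CIDR is not duplicated in two places.
resource "google_compute_firewall" "ds_peering_ingress" {
  name    = "ds-peering-ingress"
  project = var.project_id
  network = google_compute_network.vpc_network.id

  allow {
    protocol = "tcp"
    ports    = ["5432"]
  }

  source_ranges = [google_datastream_private_connection.main.vpc_peering_config[0].subnet]

  direction   = "INGRESS"
  target_tags = ["ds-tcp-proxy"]
}
```

This keeps the firewall rule in sync if the peering subnet ever changes.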



Hi, @ms4446.

Thanks for your reply.
I added your firewall rule, and I also attached a service account to the GCE instance so that it can pull Docker images from Container Registry.
It finally worked.
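
For anyone hitting the same image-pull issue, the service account attachment can be sketched roughly like this (the service account name here is illustrative, not my actual one):

```hcl
# Sketch: attach a service account so the VM can pull images from
# Container Registry. The service account reference is an assumption
# for illustration.
resource "google_compute_instance" "ds-tcp-proxy" {
  # ... existing configuration as above ...

  service_account {
    email  = google_service_account.proxy_sa.email
    scopes = ["https://www.googleapis.com/auth/devstorage.read_only"]
  }
}
```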

Thank you for your support.