Random 502 responses from Apigee X

We're experiencing random 502 responses from Apigee that did not occur before we started proxying the traffic through it.

These 502s are not captured by the Apigee debug (trace) tool and mostly appear during traffic spikes.

From what I can see, Apigee never reaches the backend server, and the timeouts are configured correctly.

The request path is: external LB -> Apigee -> internal LB (managed by the GKE Gateway API) -> Kubernetes pod.

There is no sign of these 502s in the internal LB logs that I've managed to pull.
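For reference, this is the kind of Logs Explorer filter I'd use to look for matching entries on the internal LB side around the spike window (the `internal_http_lb_rule` resource type for the regional internal Application Load Balancer behind GKE Gateway is my assumption; adjust to whatever resource type your internal LB logs actually show):

```
resource.type="internal_http_lb_rule"
httpRequest.status=502
timestamp>="2024-02-13T13:35:00Z" AND timestamp<="2024-02-13T13:45:00Z"
```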

Any suggestions on how to debug this issue, or has anyone faced something similar?

 

Example log entry (from the Apigee-facing external LB, as there are no corresponding logs on the backend/internal LB):

Logging is enabled on the backend LB with a sample rate of 1, but the entry just isn't there.
{
  "insertId": "1wtog76fv7wi0l",
  "jsonPayload": {
    "statusDetails": "response_sent_by_backend",
    "backendTargetProjectNumber": "projects/<masked>",
    "cacheDecision": [
      "RESPONSE_HAS_CONTENT_TYPE",
      "REQUEST_HAS_AUTHORIZATION",
      "CACHE_MODE_USE_ORIGIN_HEADERS"
    ],
    "@type": "type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry",
    "remoteIp": "<masked>"
  },
  "httpRequest": {
    "requestMethod": "POST",
    "requestSize": "688",
    "status": 502,
    "responseSize": "257",
    "userAgent": "Go-http-client/2.0",
    "remoteIp": "<masked>",
    "serverIp": "<masked>",
    "latency": "0.011870s"
  },
  "resource": {
    "type": "http_load_balancer",
    "labels": {
      "zone": "global",
      "forwarding_rule_name": "apigee-xlb-forwarding-rule",
      "backend_service_name": "apigee-xlb-backend",
      "target_proxy_name": "apigee-xlb-target-proxy",
      "project_id": "maskedxxxxxx-prod",
      "url_map_name": "apigee-xlb"
    }
  },
  "timestamp": "2024-02-13T13:40:16.317277Z",
  "severity": "WARNING",
  "logName": "projects/maskedxxxxxx/logs/requests",
  "trace": "projects/maskedxxxxxx/traces/db56cc35a2423d7423320b861500762a",
  "receiveTimestamp": "2024-02-13T13:40:16.697784853Z",
  "spanId": "ec9f533d40095e3e"
}
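Note that `statusDetails` is `response_sent_by_backend`, which means the external LB received this 502 from its backend (Apigee) rather than synthesizing it itself. When triaging a batch of pulled log entries, it can help to bucket them by that field; here is a minimal Python sketch (the `LB_GENERATED` value set is my assumption based on the documented `statusDetails` values, so extend it as needed):

```python
# Bucket Google Cloud load-balancer request-log entries by statusDetails
# to separate backend-originated 502s from LB-synthesized ones.
LB_GENERATED = {
    "failed_to_connect_to_backend",
    "backend_connection_closed_before_data_sent_to_client",
    "backend_timeout",
}

def classify_502(entry):
    """Return 'backend', 'load_balancer', or 'unknown' for one log entry dict."""
    details = entry.get("jsonPayload", {}).get("statusDetails", "")
    if details == "response_sent_by_backend":
        return "backend"        # the 502 body was produced by the backend (Apigee)
    if details in LB_GENERATED:
        return "load_balancer"  # the LB generated the 502 itself
    return "unknown"

# The log entry above carries statusDetails "response_sent_by_backend":
sample = {"jsonPayload": {"statusDetails": "response_sent_by_backend"}}
print(classify_502(sample))  # backend
```

If most entries land in the `backend` bucket, the next place to look is inside Apigee itself (message-processor capacity during the spikes) rather than the LB layer.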
1 REPLY

Hello @sijohnmathew, were you able to resolve this issue?