Problem upgrading Postgres to v14 during the Apigee 4.52 upgrade

Hi,

We're facing a problem upgrading PostgreSQL to v14 as part of the Apigee 4.52 upgrade. It fails while updating the Postgres master server, at the point where it dumps the DB schemas:

Copying the previous version config file to /opt/apigee/data/apigee-postgresql/pgdata-10.old
Modifying /opt/apigee/data/apigee-postgresql/pgdata/postgresql.conf file with custom setting
Performing Consistency Checks
-----------------------------
Checking cluster versions                                   ok
Checking database user is the install user                  ok
Checking database connection settings                       ok
Checking for prepared transactions                          ok
Checking for system-defined composite types in user tables  ok
Checking for reg* data types in user tables                 ok
Checking for contrib/isn with bigint-passing mismatch       ok
Checking for user-defined encoding conversions              ok
Checking for user-defined postfix operators                 ok
Checking for incompatible polymorphic functions             ok
Checking for tables WITH OIDS                               ok
Checking for invalid "sql_identifier" user columns          ok
Creating dump of global objects                             ok
Creating dump of database schemas
  apigee

*failure*

Consult the last few lines of "pg_upgrade_dump_16400.log" for
the probable cause of the failure.

The error log contains the following:

command: "/usr/pgsql-14/bin/pg_dump" --host /opt/apigee/data/apigee-postgresql/pgdata-10.old --port 50432 --username apigee --schema-only --quote-all-identifiers --binary-upgrade --format=custom  --file="pg_upgrade_dump_16400.custom" 'dbname=apigee' >> "pg_upgrade_dump_16400.log" 2>&1
pg_dump: error: query failed: ERROR:  out of shared memory
HINT:  You might need to increase max_locks_per_transaction.
pg_dump: error: query was: LOCK TABLE "analytics"."noah.live.fact" IN ACCESS SHARE MODE

It seems that we don't have enough shared memory for table locking. We tried to raise the "max_locks_per_transaction" limit by editing the file "/opt/apigee/customer/application/postgresql.properties":

conf_postgresql_effective_cache_size = 4096MB
conf_postgresql_shared_buffers = 8192MB
conf_postgresql_work_mem = 2048MB
conf_postgresql_max_locks_per_transaction = 150000
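
For reference, the value on the running server can be checked after a restart (a sketch; it assumes the default Apigee Postgres port 5432 and the apigee user):

# restart so the edited postgresql.properties settings are applied
/opt/apigee/apigee-service/bin/apigee-service apigee-postgresql restart
# confirm the running server sees the new value
psql -h localhost -p 5432 -U apigee -d apigee -c "SHOW max_locks_per_transaction;"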

But it still doesn't work. Any idea how to solve the problem?

Thank you!

ACCEPTED SOLUTION

Hi,

What is the total number of tables in the system? You can find that with:

select count(*)
from information_schema.tables;
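
For example, from the shell against the apigee analytics database (a sketch; it assumes the default apigee user, database name, and port 5432):

# -tAc: tuples only, unaligned output, run the given command
psql -h localhost -p 5432 -U apigee -d apigee -tAc "select count(*) from information_schema.tables;"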

As a rough guideline, you should set the conf_postgresql_max_locks_per_transaction and conf_postgresql_max_connections values so that:

(number of tables) < (max_connections) x (max_locks_per_transaction)

Setting max_locks_per_transaction to an arbitrarily large number might not be the best way to go about it.
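
To see why the dump hits the limit: pg_dump takes an ACCESS SHARE lock on every table in the database within a single transaction (that is the LOCK TABLE ... IN ACCESS SHARE MODE line in your log), and during the upgrade the server runs with max_connections=10 (see below). Assuming the old cluster is still on Postgres' default max_locks_per_transaction of 64, the shared lock table only has room for roughly 10 x 64 = 640 locks, which a large analytics schema easily exceeds.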

The upgrade process internally uses the db_upgrade script, which runs the pg_upgrade utility to upgrade the database. It can be found at:

$APIGEE_ROOT/apigee-postgresql/lib/actions/db_upgrade

The max_connections value for the upgrade is hard-coded to 10 in this script, which suffices for most use cases. However, if you run into problems because of a high number of tables, you can raise it to a value suited to your table count (see the sed sketch after the snippet below).

## Using the utility pg_upgrade to upgrade from $CUR_PG_VERSION to $NEW_PG_VERSION:
time "${PGENGINE}"/pg_upgrade -k -d "${OLD_PGDATA_DIR}" -D "${NEW_PGDATA_DIR}" \
-b "${CUR_PG_BINDIR}" -B "${NEW_PG_BINDIR}" \
--old-options "-c max_connections=10 -c config_file=${OLD_PGDATA_DIR}/postgresql.conf" \
--new-options "-c max_connections=10 -c config_file=${NEW_PGDATA_DIR}/postgresql.conf"
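
One way to make that change is a one-line edit (a sketch; it assumes the max_connections=10 string appears in the script exactly as in the snippet above, so inspect the file afterwards):

# keep a .bak backup and bump the hard-coded value in both --old-options and --new-options
sed -i.bak 's/max_connections=10/max_connections=100/g' "$APIGEE_ROOT/apigee-postgresql/lib/actions/db_upgrade"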


For example, if you have 50,000 tables, you can set max_connections to 100 in this script and set max_locks_per_transaction to 500 or higher in your Postgres config (/opt/apigee/customer/application/postgresql.properties). Alternatively, you can remove max_connections from this script altogether and let it use the value set in your postgresql.properties:

## Using the utility pg_upgrade to upgrade from $CUR_PG_VERSION to $NEW_PG_VERSION:
time "${PGENGINE}"/pg_upgrade -k -d "${OLD_PGDATA_DIR}" -D "${NEW_PGDATA_DIR}" \
-b "${CUR_PG_BINDIR}" -B "${NEW_PG_BINDIR}" \
--old-options "-c config_file=${OLD_PGDATA_DIR}/postgresql.conf" \
--new-options "-c config_file=${NEW_PGDATA_DIR}/postgresql.conf"
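
Once the upgrade has completed, it is worth confirming that the new v14 cluster came up with the settings you intended (again a sketch, assuming the default port and user):

# psql accepts multiple -c options and runs them in order
psql -h localhost -p 5432 -U apigee -d apigee -c "SHOW max_connections;" -c "SHOW max_locks_per_transaction;"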

