Guidance for PostgreSQL configuration choices

This guidance accompanies the configuration steps listed here, explaining why some settings or permissions are required and the trade-offs behind the suggestions made in some cases. Make sure you understand the implications of these configuration changes, and make choices suitable for your own assessment of your particular environment's needs.

Configure extensions, logging, and other settings (steps 1-3)

Step 1: Enable extensions and logging

Unlike some other database platforms, PostgreSQL uses optional extensions to provide much of its diagnostic information. Redgate Monitor uses two extensions which are packaged with PostgreSQL, but aren't enabled by default:

  • pg_stat_statements provides information about executed queries.
  • auto_explain captures explain plans (execution plans) for queries to understand their performance.

Extensions are enabled by adding them to the shared_preload_libraries configuration setting, which takes a comma-separated list of extensions. Note that other extensions already in this list should be kept, not replaced.
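For illustration, both extensions could be added from psql like this (a sketch assuming the list is currently empty - if shared_preload_libraries already contains entries, include them in the new value):

    -- Sketch: adds both extensions; preserve any existing entries in the list.
    ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements,auto_explain';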

Redgate Monitor also requires logging to be enabled on the instance and written in csv format. This requires setting logging_collector to on, and including 'csvlog' in the log_destination setting. PostgreSQL can write logs in different formats concurrently - for example, setting log_destination to 'stderr,csvlog' writes a raw text version of the logs alongside the csv file. Redgate Monitor only consumes csv-based log files, but if it's useful to retain raw text (or other format) logs, this is an option - the important thing for Redgate Monitor is that csvlog be in the list.
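For example, the logging settings could be applied as follows (a sketch - 'stderr,csvlog' is one possible choice; the essential part is that csvlog appears in the list):

    -- Both settings require a restart to take effect (see Step 3).
    ALTER SYSTEM SET logging_collector = on;
    ALTER SYSTEM SET log_destination = 'stderr,csvlog';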

Step 2: Tune data collection

Tuning configuration parameters helps strike the right balance between performance overhead and capturing rich information.

Most of the suggested configuration settings control the auto_explain extension, which has three types of setting:

  • Capturing format. You must set log_format to json and log_level to LOG to record data in a format Redgate Monitor can consume.
  • Throttling. Logging explain plans for every execution would be very expensive, so log_min_duration allows queries faster than a given number of milliseconds to be ignored. For busy workloads, setting sample_rate to a value between 0 and 1 captures only a random subset of executions, which can be helpful. Optimal values for these settings are workload-dependent, but a good choice for many is a threshold around 2 seconds with no sampling.
  • Information detail. Eight settings control what data is captured in an explain plan - each provides helpful richness at the cost of additional performance overhead. For most workloads, enabling all of these is reasonable; log_timing is the most prone to imposing performance overhead, so it's worth considering disabling that option.
  • auto_explain.log_format - suggested: json. Essential.
  • auto_explain.log_level - suggested: LOG. Essential.
  • auto_explain.log_min_duration - suggested: 2000. Pick a value (in milliseconds) above which queries may be worth investigating. If a low value here results in very large numbers of executions falling within the threshold, consider using the sample_rate setting to avoid capturing the explain plan for every individual execution.
  • auto_explain.sample_rate - suggested: 1.0. Set to a value between 0.0 and 1.0 to control what fraction of queries exceeding log_min_duration have explain plans recorded. Consider a fraction substantially below 1.0 when monitoring high-throughput systems where you want a low log_min_duration value.
  • auto_explain.log_verbose - suggested: true. Provides more detailed explain plan information, in particular including output column lists.
  • auto_explain.log_settings (PostgreSQL 12+ only) - suggested: true. Provides details of any PostgreSQL configuration settings with non-default values which affected query planning.
  • auto_explain.log_nested_statements - suggested: true. Records the plan for any nested statement invoked by a function call.
  • auto_explain.log_analyze - suggested: true. Provides detailed runtime information, at the expense of instance performance. Required for certain other settings to take effect.
  • auto_explain.log_buffers - suggested: true (only takes effect when log_analyze is enabled). Provides information about the numbers of data blocks read, hit, dirtied and written. If track_io_timing is enabled (see below), this also captures timing information for accessing those blocks.
  • auto_explain.log_wal (PostgreSQL 13+ only) - suggested: true (only takes effect when log_analyze is enabled). Provides information about Write-Ahead Log usage, including the amount of WAL data generated.
  • auto_explain.log_timing - suggested: true (only takes effect when log_analyze is enabled). Provides node-level execution timing information. This is often the most performance-impacting option when recording explain plans, so consider disabling it in more critical scenarios, particularly if precise timing information is less important than, for example, row counts (which can be recorded more cheaply).
  • auto_explain.log_triggers - suggested: true (only takes effect when log_analyze is enabled). Provides statistics for executed triggers.

More detailed information on these options is available in the PostgreSQL documentation.
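As a worked example, the suggested values above could be applied as follows (a sketch - adjust log_min_duration and sample_rate to suit your workload, omit the version-specific settings on older releases, and note that editing postgresql.conf directly works equally well):

    ALTER SYSTEM SET auto_explain.log_format = 'json';
    ALTER SYSTEM SET auto_explain.log_level = 'LOG';
    ALTER SYSTEM SET auto_explain.log_min_duration = 2000;    -- milliseconds
    ALTER SYSTEM SET auto_explain.sample_rate = 1.0;
    ALTER SYSTEM SET auto_explain.log_verbose = on;
    ALTER SYSTEM SET auto_explain.log_settings = on;          -- PostgreSQL 12+ only
    ALTER SYSTEM SET auto_explain.log_nested_statements = on;
    ALTER SYSTEM SET auto_explain.log_analyze = on;
    ALTER SYSTEM SET auto_explain.log_buffers = on;
    ALTER SYSTEM SET auto_explain.log_wal = on;               -- PostgreSQL 13+ only
    ALTER SYSTEM SET auto_explain.log_timing = on;            -- consider off on critical systems
    ALTER SYSTEM SET auto_explain.log_triggers = on;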

Additionally, by default PostgreSQL doesn't record timing information about disk I/O because of the performance overhead. However, because it can be extremely useful in investigating performance issues, we recommend it be enabled in most situations via the track_io_timing option.
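For example:

    -- Can be applied without a restart via pg_reload_conf() (see Step 3).
    ALTER SYSTEM SET track_io_timing = on;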

Finally, the pg_stat_statements extension can be configured to track queries at different levels through the pg_stat_statements.track setting (either none to capture no data, top to capture top-level statements, or all to also capture any child statements called within functions or similar). We recommend this be set to top, though all is also an acceptable value - it must not be set to none or no query information will be available.
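For example, to apply the recommended value:

    ALTER SYSTEM SET pg_stat_statements.track = 'top';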

Step 3: Restart to apply changes

The changes described in Steps 1 and 2 do not take effect immediately. Instead:

  • Changes to enable extensions and logging (described in Step 1) only take effect after a full service restart, which should be performed during a suitable maintenance window.
  • Changes to tune the other data collection settings (described in Step 2) are dynamic - while they do take effect after a full service restart, they can also be applied without having to restart the PostgreSQL instance by executing pg_reload_conf() as a superuser.

When following all of Steps 1-6 from start to finish, a single restart at Step 3 is sufficient, as it applies the changes made in both Steps 1 and 2. However, if you later refine only the settings described in Step 2, executing pg_reload_conf() is helpful in allowing you to apply changes without waiting for a maintenance window for the brief downtime a restart requires.
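For example, after adjusting any of the Step 2 settings:

    -- Re-read the configuration files without a restart; run as a superuser.
    SELECT pg_reload_conf();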

Grant access and permissions (steps 4-6)

Step 4: Create monitoring database and role

In PostgreSQL, connections are made to a specific database within an instance, rather than to the instance in general. As such, you'll need to pick a database for Redgate Monitor to log into, through which it will collect information about the entire instance. We recommend you do this by creating a dedicated utility database called "redgatemonitor", in which you will configure things like extensions which provide access to data instance-wide. For example, the pg_stat_statements extension which was enabled globally in an earlier step still has to be installed into a specific database - Redgate Monitor will query pg_stat_statements in that one database, but through this it will retrieve information about queries executed against all databases on the instance.

The utility database is only used as the route through which Redgate Monitor connects to the PostgreSQL instance - it isn't used to store monitoring data, which is instead stored in Redgate Monitor's own data repository.

If you prefer not to create a dedicated "redgatemonitor" utility database, you can alternatively set up the suggested permissions in an already-existing database, like the default "postgres" database. We strongly recommend you create a dedicated login for monitoring, giving it the minimum necessary permissions described here rather than using a superuser account.

The pg_monitor role is a standard PostgreSQL role designed to provide access to key monitoring data.

Granting privileges on the dedicated utility database allows Redgate Monitor to interact with various extensions which will be made available on that database in later steps.
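A minimal sketch of this step might look like the following (the password is a placeholder, and the exact privileges to grant are listed in the main configuration steps):

    CREATE DATABASE redgatemonitor;
    CREATE ROLE redgatemonitor WITH LOGIN PASSWORD 'change-me';  -- placeholder password
    GRANT pg_monitor TO redgatemonitor;
    GRANT CONNECT ON DATABASE redgatemonitor TO redgatemonitor;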

Step 5: Configure and grant access to richer monitoring data

Extensions are only available via databases in which they have been installed - this step installs two extensions into the utility database so that Redgate Monitor can access them:

  • pg_stat_statements which was enabled in Step 1 is used to provide information about executed queries.
  • A Foreign Data Wrapper (FDW) provides access to log files via queries against the PostgreSQL database. The specifics of this wrapper differ by platform - file_fdw is used in Linux environments, while log_fdw is used on Amazon RDS or Aurora - so ensure you pick the right approach for your chosen platform. In the case of the FDW extension, some additional configuration is also performed to make the log files accessible (see the sketch after this list).
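A sketch of the extension installation, run while connected to the utility database (the foreign server name below is illustrative, and the exact foreign table setup depends on your platform and log file locations - follow the main configuration steps for the specifics):

    -- Run while connected to the "redgatemonitor" utility database.
    CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

    -- Linux environments:
    CREATE EXTENSION IF NOT EXISTS file_fdw;
    CREATE SERVER log_server FOREIGN DATA WRAPPER file_fdw;  -- illustrative server name

    -- Amazon RDS or Aurora environments use log_fdw instead:
    -- CREATE EXTENSION IF NOT EXISTS log_fdw;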

Step 6 (optional): Provide access to vacuum health data

Finally, this step allows Redgate Monitor to optionally collect information about vacuum health. To collect vacuum information for a table, the login must be granted SELECT on it (effectively, being able to read data from that table). In PostgreSQL 14+, the pg_read_all_data role provides this permission for all tables (as well as views and sequences) in all databases on the instance - this is the recommended approach. Alternatively, you can grant SELECT on individual objects, either at the schema level or on individual tables; this is the necessary approach if you would prefer to give the "redgatemonitor" login access to this information only for specific tables, or if you are monitoring PostgreSQL 13 or earlier.
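For example (the schema and table names in the selective grants are illustrative):

    -- PostgreSQL 14+: read access to all tables, views and sequences in every database.
    GRANT pg_read_all_data TO redgatemonitor;

    -- PostgreSQL 13 and earlier, or for selective access:
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO redgatemonitor;
    GRANT SELECT ON public.orders TO redgatemonitor;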

Because of limitations in the PostgreSQL permissions model, it isn't possible to collect vacuum health data for a table without granting SELECT on it, which technically also allows the login to read the table's data. However, Redgate Monitor does not read any of the actual contents of these tables - only information about table size and the composition of dead vs. live tuples.

It isn't necessary to provide the permissions in Step 6 - Redgate Monitor will silently omit vacuum information for any tables, schemas, or databases it doesn't have these permissions on, so if granting this permission isn't acceptable, Step 6 can be skipped.

