Preparing PostgreSQL for monitoring
Published 28 March 2024
PostgreSQL exposes detailed monitoring information through extensions that are not typically enabled or configured by default. Before connecting Redgate Monitor to your PostgreSQL instance, two areas therefore need to be configured, regardless of whether you self-host your database or use a cloud provider.
This page lists concrete steps to get a PostgreSQL instance ready for monitoring. The setting values shown are a good general starting point. However, some suggested values make tradeoffs between data richness and performance overhead, so may not be the correct balance in all cases. We recommend you read the additional guidance on configuration choices to further understand what each setting does and if it makes sense to modify some of them for your specific environment.
Configure extensions, logging, and other settings (steps 1-3)
Step 1: Enable extensions and logging
Change the following configuration settings in the postgresql.conf file to enable essential extensions and logging:
```
shared_preload_libraries = 'pg_stat_statements,auto_explain'
logging_collector = on
log_destination = 'csvlog'
```
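If you prefer not to edit postgresql.conf by hand, the same values can be applied with standard ALTER SYSTEM commands, shown here as a sketch; they write to postgresql.auto.conf and, for these particular settings, still require a restart:

```sql
-- Writes to postgresql.auto.conf, which overrides postgresql.conf;
-- these settings still only take effect after a restart.
ALTER SYSTEM SET shared_preload_libraries = 'pg_stat_statements,auto_explain';
ALTER SYSTEM SET logging_collector = on;
ALTER SYSTEM SET log_destination = 'csvlog';
```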
Step 2: Tune data collection
Change the following configuration settings in the postgresql.conf file to tune what data is captured, adapting them as needed based on the guidance. Pay special attention to the auto_explain.log_min_duration and auto_explain.sample_rate values, which are important for getting the most value from query execution monitoring.
```
# Essential
auto_explain.log_format = json
auto_explain.log_level = LOG

# Throttle which explain plans are recorded
auto_explain.log_min_duration = 2000
auto_explain.sample_rate = 1.0

# Configure detail level
auto_explain.log_verbose = true
auto_explain.log_settings = true
auto_explain.log_nested_statements = true
auto_explain.log_analyze = true
auto_explain.log_buffers = true
auto_explain.log_wal = true
auto_explain.log_timing = true
auto_explain.log_triggers = true

# Capture IO performance
track_io_timing = true

# Ensure the pg_stat_statements extension captures data
pg_stat_statements.track = top
```
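Once these settings are active, the effective values can be confirmed from any SQL session with a catalog query such as the following (a sketch using the standard pg_settings view; the auto_explain.* entries only appear after the library has been loaded via shared_preload_libraries):

```sql
-- Check the effective values of the monitoring-related settings
SELECT name, setting
FROM pg_settings
WHERE name IN ('shared_preload_libraries',
               'track_io_timing',
               'auto_explain.log_min_duration',
               'auto_explain.sample_rate');
```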
Step 3: Restart to apply changes
Restart the PostgreSQL instance for those changes to take effect:
This is the only time a restart is required, but in some environments, particularly those without high availability configured, an appropriate maintenance window may be needed. Note that while the changes made in Step 1 can only be applied by restarting the PostgreSQL instance, the changes in Step 2 can be applied without a restart by executing pg_reload_conf(), providing a way to iteratively refine those settings without additional downtime.
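For example, after adjusting one of the Step 2 values, a superuser can reload and confirm the new setting from any SQL session (a sketch using the standard pg_reload_conf() function and pg_settings view):

```sql
-- Re-read postgresql.conf without restarting the server
SELECT pg_reload_conf();

-- Confirm the value now in effect
SELECT name, setting
FROM pg_settings
WHERE name = 'auto_explain.log_min_duration';
```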
For a self-hosted instance, connect to the host over SSH and run:
sudo systemctl restart postgresql
For Amazon RDS and Aurora, reboot the instance from the AWS console:
- In the AWS console, navigate to the RDS service, ensuring you are viewing the correct AWS region.
- In the left-hand navigation, select the Databases menu item to display a list of instances.
- Select the instance you wish to restart.
- Click the Actions menu item, and choose Reboot.
Note that for Amazon Aurora, each instance in the cluster must be rebooted.
More guidance on rebooting approaches for Amazon Aurora is available in the AWS documentation.
Grant access and permissions (steps 4-6)
Step 4: Create monitoring database and role
As a superuser, connect to your PostgreSQL instance and run the following statements to create a utility database and a monitoring login with basic permissions applied.
```sql
-- Create the utility database
CREATE DATABASE redgatemonitor;

-- Create the user with basic monitoring permissions
CREATE USER redgatemonitor WITH PASSWORD 'Y0uRp@s$w0rD';
GRANT pg_monitor TO redgatemonitor;
GRANT ALL PRIVILEGES ON DATABASE redgatemonitor TO redgatemonitor;
```
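To sanity-check this step, the role, its pg_monitor membership, and the utility database can all be confirmed with standard catalog queries, for example:

```sql
-- The role should exist and be a member of pg_monitor
SELECT r.rolname
FROM pg_roles r
JOIN pg_auth_members m ON m.member = r.oid
JOIN pg_roles g ON g.oid = m.roleid
WHERE r.rolname = 'redgatemonitor'
  AND g.rolname = 'pg_monitor';

-- The utility database should exist
SELECT datname FROM pg_database WHERE datname = 'redgatemonitor';
```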
Step 5: Configure and grant access to richer monitoring data
As a superuser, connect specifically to the "redgatemonitor" utility database you created above, and run the script below that matches your platform to provide access to query and log data via extensions:
Self-hosted (uses the file_fdw extension):

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER sqlmonitor_file_server FOREIGN DATA WRAPPER file_fdw;
GRANT pg_read_server_files TO redgatemonitor;
GRANT EXECUTE ON FUNCTION pg_catalog.pg_current_logfile(text) TO redgatemonitor;
GRANT USAGE ON FOREIGN SERVER sqlmonitor_file_server TO redgatemonitor;
```
Amazon RDS and Aurora (uses the log_fdw extension):

```sql
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
CREATE EXTENSION IF NOT EXISTS log_fdw;
CREATE SERVER sqlmonitor_file_server FOREIGN DATA WRAPPER log_fdw;
GRANT EXECUTE ON FUNCTION list_postgres_log_files() TO redgatemonitor;
GRANT EXECUTE ON FUNCTION create_foreign_table_for_log_file(text, text, text) TO redgatemonitor;
GRANT USAGE ON FOREIGN SERVER sqlmonitor_file_server TO redgatemonitor;
```
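Whichever script you ran, you can confirm it took effect by querying the standard catalogs from the redgatemonitor database (a sketch; which extension row appears depends on your platform):

```sql
-- Both extensions and the foreign server should be listed
SELECT extname FROM pg_extension
WHERE extname IN ('pg_stat_statements', 'file_fdw', 'log_fdw');

SELECT srvname FROM pg_foreign_server
WHERE srvname = 'sqlmonitor_file_server';
```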
Step 6 (optional): Provide access to vacuum health data
Vacuum health information can only be collected for tables to which Redgate Monitor has read access. There are two ways to provide this access, depending on which version of PostgreSQL you are using.
This step can be skipped, but vacuum health information will then not be collected.
On PostgreSQL 14 and later, as a superuser, connect to any database and run:

```sql
GRANT pg_read_all_data TO redgatemonitor;
```
On PostgreSQL 13 and earlier, as a superuser, connect in turn to each database from which you want Redgate Monitor to capture vacuum health data. Then run the following for any schemas or tables in that database for which you want vacuum health data collected:
Either (collect data for all tables within schema)
```sql
-- Grant permissions for an entire schema within the database:
GRANT SELECT ON ALL TABLES IN SCHEMA myschema TO redgatemonitor;
```
Or (collect data for only specific tables)
```sql
-- Or alternatively, selectively grant permissions for an individual table:
GRANT SELECT ON myschema.mytable TO redgatemonitor;
```
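To confirm a grant took effect, the built-in has_table_privilege() function can be used (a sketch using the example table from the grants above):

```sql
-- Should return true for each table the monitoring role can read
SELECT has_table_privilege('redgatemonitor', 'myschema.mytable', 'SELECT');
```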