Getting Started with AWS RDS PostgreSQL and AWS Aurora
You can use pgDash with AWS RDS and Aurora, just like with any other PostgreSQL database server.
pgDash supports AWS RDS PostgreSQL and AWS Aurora with a PostgreSQL engine. For AWS Aurora, there is no difference between a reader and writer instance as far as pgmetrics and pgDash are concerned. Serverless and Global versions of Aurora are also supported.
The steps for using pgDash with AWS RDS PostgreSQL and with an Aurora reader/writer instance are the same.
The following steps should help you get up and running with monitoring your RDS instance with pgDash in just a few minutes.
In a nutshell, it involves setting up a periodic (cron) job on a system with access to the database (typically an EC2 instance in the same subnet) that connects to the RDS instance, collects metrics, and sends them to pgDash.
1. Get pgmetrics
pgmetrics is an open-source, command-line tool that collects metrics from your RDS instance. It can display the results in an easy-to-read text format, or as JSON for sending to other tools. pgmetrics is a single, statically-linked binary, which makes it easy to deploy. By design, it uses almost exactly the same arguments as the standard psql command, and it understands the same environment variables and .pgpass file that other PostgreSQL CLI tools do.
You can download the latest release of pgmetrics from GitHub here: https://github.com/rapidloop/pgmetrics/releases
Unpack the release and copy the pgmetrics binary to an EC2 instance with access to the database.
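For example, on a 64-bit Linux EC2 instance (the version number and asset name below are illustrative; check the releases page for the current ones):

```
# download and unpack a pgmetrics release (adjust version/platform as needed)
curl -LO https://github.com/rapidloop/pgmetrics/releases/download/v1.17.1/pgmetrics_1.17.1_linux_amd64.tar.gz
tar xzf pgmetrics_1.17.1_linux_amd64.tar.gz
sudo install pgmetrics_1.17.1_linux_amd64/pgmetrics /usr/local/bin/
```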
2. Get pgdash CLI
The pgdash CLI tool is a simple command-line tool to send the output of pgmetrics to pgDash.
You can download the latest release of pgdash CLI from GitHub here: https://github.com/rapidloop/pgdash/releases
Unpack the release and copy the pgdash binary to the same EC2 instance as before.
3. Run pgmetrics
Assume you're able to connect to your database using psql like this:
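(The endpoint, user, and database name below are placeholders; substitute your own.)

```
psql -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel mydb
```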
You can then run pgmetrics using the same arguments:
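Continuing the placeholder example above:

```
pgmetrics -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel mydb
```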
Here wheel is a user with superuser privileges, but you can use any credentials that have access to the pg_catalog schema.
You'll need to invoke pgmetrics without it prompting for a password, so that you can automate the collection process. The standard ways all work: .pgpass files, trust authentication, and client certificates. You can even directly supply the password in plain text, like this:
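For example, using the standard PGPASSWORD environment variable that all libpq-based tools understand (values are placeholders):

```
PGPASSWORD=sekrit pgmetrics -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel mydb
```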
You should now be able to invoke pgmetrics without a password and see the text output. pgmetrics can also generate the report in JSON format, which is what we need in order to send it to pgdash. Use the -f json option for this:
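```
# same connection placeholders as above, now with JSON output
pgmetrics -f json -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel mydb
```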
4. Send pgmetrics Report to pgDash
To send the report generated by pgmetrics to pgDash, you'll need an API key. If you haven't signed up for pgDash yet, you can do that here. The API key can be found on your profile page.
To actually send the report, simply pipe the output of pgmetrics to the pgdash CLI tool:
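The general form looks like this (the connection arguments, API key, and server name are placeholders):

```
pgmetrics -f json {connection-args} | pgdash -a {api-key} report {server-name}
```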
for example:
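```
# placeholders: substitute your endpoint, user, database, API key, and server name
pgmetrics -f json -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel mydb | pgdash -a y0urAP1key report mydb
```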
Note that pgmetrics must be able to run without prompting for a password; otherwise, this command will appear to hang (it's actually waiting for a password).
We recommend using the RDS instance name as the server name, so that it is easy to identify and relate to in the pgDash UI.
Your server should now appear in the pgDash UI at https://app.pgdash.io/servers.
Tip: If you get a 429 error, it means you're being rate limited. Wait at least 60 seconds before trying to send another report.
5. (Optional) Setup the pg_stat_statements Extension
pg_stat_statements is a very useful extension that is bundled by default in RDS. It provides information about query performance that pgmetrics can collect and report. You can find more information about pg_stat_statements here in the PostgreSQL docs.
Locate the Parameter Group
Locate and open the parameter group for your RDS instance in the AWS console. In this example, the group is called "default.postgres11".
Ensure shared_preload_libraries is present
Search for shared_preload_libraries and ensure that it contains the value pg_stat_statements. This might already be enabled in your case. Note that if you had to add it, you'll need to restart your RDS instance, because PostgreSQL loads shared libraries only at startup.
Create the extension
Run the "CREATE EXTENSION" command to create the extension in the database. You must do this in each database you need to monitor.
Check if a SELECT works for the pgmetrics user
Finally, you should check that the user pgmetrics connects as has permission to SELECT from the pg_stat_statements view:
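A quick check (placeholders as before; connect as the same user pgmetrics will use):

```
psql -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel -c "SELECT count(*) FROM pg_stat_statements;" mydb
```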
That's it! pgmetrics will automatically collect SQL query metrics if this extension is present.
6. (Optional) Setup the auto_explain Extension
The auto_explain extension can capture the execution plans of slow queries and log them into CloudWatch Logs. To view these execution plans in pgDash, first set up auto_explain (this step), then let pgmetrics read the logs via CloudWatch Logs (next step, see below).
To set up auto_explain, edit the parameter group of the database instance and ensure that the value of the configuration setting shared_preload_libraries includes auto_explain.
You'll also need to configure the settings of the auto_explain extension. These settings are documented extensively here. Here is a sample configuration:
```
auto_explain.log_min_duration = 2000
auto_explain.log_analyze = 1
auto_explain.log_timing = 0
auto_explain.log_format = json
```
With the settings above, any SQL statement that runs for more than 2 seconds will have its execution plan logged to the PostgreSQL log file in JSON format. Be sure to go through all the available settings in the documentation and tweak them to suit your deployment.
Note that if you had to change the value of shared_preload_libraries, a reboot of the database instance will be required.
7. (Optional) Collect AWS CloudWatch Metrics
pgmetrics can also collect metrics from AWS CloudWatch, for RDS Postgres and Aurora (with Postgres engine) services. This provides information about the VM instance where the database engine is running, including memory and disk usage. pgmetrics will also collect Enhanced Monitoring metrics if you've enabled that in your AWS setup. It will also collect PostgreSQL logs via AWS CloudWatch Logs.
To let pgmetrics collect CloudWatch metrics, also pass the database instance ID of the database being monitored to pgmetrics via the --aws-rds-dbid option:
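For example (the instance ID and other values are placeholders):

```
pgmetrics -f json --aws-rds-dbid=mydb -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel mydb | pgdash -a y0urAP1key report mydb
```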
In case of AWS Aurora, use the database instance ID of the reader or writer instance.
pgmetrics supports standard methods of accessing the credentials required for calling the AWS APIs. As a best practice, AWS recommends that you specify credentials in the following order:
Use IAM roles for Amazon EC2 (if your application is running on an Amazon EC2 instance)
Use a shared credentials file (~/.aws/credentials)
Use environment variables (AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and others)
This AWS documentation page has more details.
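For example, to supply credentials via environment variables before invoking pgmetrics (values are placeholders):

```
# standard AWS SDK environment variables; substitute your own values
export AWS_REGION=us-east-1
export AWS_ACCESS_KEY_ID=AKIAxxxxxxxxxxxxxxxx
export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
pgmetrics -f json --aws-rds-dbid=mydb -h mydb.xxxxxxxxxxxx.us-east-1.rds.amazonaws.com -U wheel mydb
```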
More information about AWS support can be found in the Integrations page.
Finally, note that AWS will charge you for the API calls made by pgmetrics to collect this information. See the AWS CloudWatch pricing page for more details.