You can use pgDash with AWS RDS, just like with any other PostgreSQL database server. The following steps should help you get up and running with monitoring your RDS instance with pgDash in just a few minutes.
In a nutshell, it involves setting up a periodic (cron) job on a system with access to the database (typically an EC2 instance in the same subnet) that connects to the RDS instance, collects metrics, and sends them to pgDash.
pgmetrics is an open-source, command-line tool that collects metrics from your RDS instance. It can display the results in an easy-to-read text format, or as JSON for sending to other tools. pgmetrics is a single, statically-linked binary, which makes it easy to deploy. By design, it accepts almost exactly the same arguments as the standard psql command, and it understands the same environment variables and .pgpass file that other PostgreSQL CLI tools do.
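For example, instead of passing connection details as flags on every invocation, you can set the standard libpq environment variables. A sketch, using the host, user, and database names from the examples in this guide (substitute your own):

```shell
# pgmetrics reads the same libpq environment variables as psql.
# These values match the examples in this guide -- substitute your own.
export PGHOST=mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com
export PGUSER=wheel
export PGDATABASE=mydb

# With these set, a bare invocation is equivalent to:
#   pgmetrics -h $PGHOST -U $PGUSER $PGDATABASE
# pgmetrics
```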
You can download the latest release of pgmetrics from GitHub here: https://github.com/rapidloop/pgmetrics/releases
Unpack the release and copy the pgmetrics binary to an EC2 instance with access to the database.
The pgdash CLI tool is a simple command-line tool to send the output of pgmetrics to pgDash.
You can download the latest release of pgdash CLI from GitHub here: https://github.com/rapidloop/pgdash/releases
Unpack the release and copy the pgdash binary to the same EC2 instance as before.
Assume you're able to connect to your database using psql like this:
~$ psql -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel mydb
Password for user wheel:
Timing is on.
Null display is "~".
psql (10.6 (Ubuntu 10.6-0ubuntu0.18.10.1), server 11.1)
WARNING: psql major version 10, server major version 11.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

mydb=>
You can then run pgmetrics using the same arguments:
~$ pgmetrics -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel mydb
Password:

pgmetrics run at: 24 Mar 2019 7:48:19 AM (14 seconds ago)

PostgreSQL Cluster:
    Name:
    Server Version:      11.1
    Server Started:      24 Mar 2019 7:41:33 AM (7 minutes ago)
    System Identifier:   6649771538356156598
    Timeline:            1
    Last Checkpoint:     24 Mar 2019 7:47:07 AM (1 minute ago)
    REDO LSN:            0/C000D18
    Checkpoint LSN:      0/C000D50 (56 B since REDO)
    Transaction IDs:     562 to 620 (diff = 58)
    Notification Queue:  0.0% used
    Active Backends:     3 (max 87)
    Recovery Mode?       no

[..snip..]
Here wheel is a user with superuser privileges, but you can use any credentials that have access to the pg_catalog schema.
You'll need to invoke pgmetrics without it prompting for a password, so that you can automate the collection process. The standard ways all work: .pgpass files, trust authentication, and client certificates. You can even supply the password directly in plain text, like this:
~$ PGPASSWORD='MY-PASSWORD' pgmetrics -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel mydb

pgmetrics run at: 24 Mar 2019 7:54:43 AM (14 seconds ago)

PostgreSQL Cluster:
    Name:
    Server Version:      11.1
    Server Started:      24 Mar 2019 7:41:33 AM (13 minutes ago)
    System Identifier:   6649771538356156598
    Timeline:            1
    Last Checkpoint:     24 Mar 2019 7:52:07 AM (2 minutes ago)
    REDO LSN:            0/10001230
    Checkpoint LSN:      0/10001268 (56 B since REDO)
    Transaction IDs:     562 to 621 (diff = 59)
    Notification Queue:  0.0% used
    Active Backends:     3 (max 87)
    Recovery Mode?       no

[..snip..]
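A safer alternative to a plain-text password on the command line is a .pgpass file. A minimal sketch, using the host, user, and password placeholders from the example above (substitute your own values):

```shell
# Append a .pgpass entry in the format hostname:port:database:username:password.
# All values below are placeholders from the examples in this guide.
echo 'mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com:5432:mydb:wheel:MY-PASSWORD' >> ~/.pgpass

# libpq ignores the file unless it is readable only by its owner.
chmod 0600 ~/.pgpass
```

With this in place, psql and pgmetrics pick up the password automatically, with no PGPASSWORD needed.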
You should now be able to invoke pgmetrics without a password and see the text output. pgmetrics can also generate the report in JSON format, which is what we need in order to send it to pgdash. Use the -f json option for this:
~$ PGPASSWORD='MY-PASSWORD' pgmetrics -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel -f json mydb

{
  "meta": {
    "version": "1.5",
    "at": 1553394427,
    "collected_dbs": ["mydb"],
    "local": false
  },
  "start_time": 1553393493,
  "system_identifier": "6649771538356156598",
  "checkpoint_lsn": "0/140001D0",
  "prior_lsn": "",
  "redo_lsn": "0/14000198",
  "timeline_id": 1,

[..snip..]
To send the report generated by pgmetrics to pgDash, you'll need an API key. If you haven't signed up for pgDash yet, you can do that here. The API key can be found in your profile page.
To actually send the report, simply pipe the output of pgmetrics to the pgdash CLI tool:
pgmetrics -f json {args} | pgdash -a YOUR-API-KEY report RDS-INSTANCE-NAME
for example:
~$ PGPASSWORD='MY-PASSWORD' pgmetrics -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel -f json mydb | pgdash -a hkBjMH5qgtzTpPj8LpYQSM report mydbinstance
Note that pgmetrics must be able to execute without prompting for a password; otherwise, this command will appear to hang (it is actually waiting for a password).
We recommend using the RDS instance name so that it is easy to identify and relate to in the pgDash UI.
You should now see this in the pgDash UI at https://app.pgdash.io/servers:
Tip: If you get a 429 error, it means you're being rate limited. You'll need to wait at least 60 seconds before sending another report.
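To automate the collection, wrap the pipeline above in a cron job on the EC2 instance. A sketch of a crontab entry that reports every 5 minutes (the paths, schedule, and API key are assumptions; adjust for your setup, and rely on a .pgpass file rather than an inline password where possible):

```shell
# m h dom mon dow  command
*/5 * * * * /usr/local/bin/pgmetrics -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel -f json mydb | /usr/local/bin/pgdash -a YOUR-API-KEY report mydbinstance
```

A 5-minute interval stays comfortably above the 60-second rate limit mentioned above.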
pg_stat_statements is a very useful extension that is bundled by default in RDS. It provides information about query performance that pgmetrics can collect and report. You can find more information about pg_stat_statements here in the PostgreSQL docs.
Locate and open the parameter group for your RDS instance:
In this case, the group is called "default.postgres11".
Search for shared_preload_libraries and ensure that it contains the value pg_stat_statements. This might already be enabled in your case. Note that if you had to add it, you'll need to restart your RDS instance, because PostgreSQL loads shared libraries only at startup.
Run the "CREATE EXTENSION" command to create the extension in the database. You must do this in each database you need to monitor.
~$ psql -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel mydb
Password for user wheel:
Timing is on.
Null display is "~".
psql (10.6 (Ubuntu 10.6-0ubuntu0.18.10.1), server 11.1)
WARNING: psql major version 10, server major version 11.
         Some psql features might not work.
SSL connection (protocol: TLSv1.2, cipher: ECDHE-RSA-AES256-GCM-SHA384, bits: 256, compression: off)
Type "help" for help.

mydb=> CREATE EXTENSION pg_stat_statements;
CREATE EXTENSION
Time: 381.516 ms
mydb=>
Finally, you should check that the user pgmetrics connects as has permission to SELECT from the pg_stat_statements view:
mydb=> SELECT * FROM pg_stat_statements LIMIT 1;
 userid |  dbid | queryid              | query
--------+-------+----------------------+-----------------------------------------------------------------------------------------------
     10 | 16384 | -3405321599851175834 | select coalesce(max(pg_current_wal_lsn()::pg_lsn - slots.restart_lsn), $1) as slot_byte_lag fr
(1 row)
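If this SELECT fails with a permission error and you'd rather not monitor as a superuser, one option on PostgreSQL 10 and later is to grant the built-in pg_read_all_stats role to the monitoring user. A sketch, where pgmetrics_user is a hypothetical user name (substitute your own):

```shell
# Run as a user with sufficient privileges (e.g. the RDS master user).
# "pgmetrics_user" is hypothetical -- substitute your monitoring user.
psql -h mydbinstance.cipenhlkhjrl.us-east-1.rds.amazonaws.com -U wheel mydb \
  -c 'GRANT pg_read_all_stats TO pgmetrics_user;'
```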
That's it! pgmetrics will automatically collect SQL query metrics if this extension is present.
pgmetrics can also collect metrics from AWS CloudWatch, for RDS PostgreSQL and RDS Aurora (with PostgreSQL engine) services. This provides information about the VM instance where the database engine is running, including memory and disk usage. pgmetrics will also collect Enhanced Monitoring metrics if you've enabled that in your AWS setup, as well as PostgreSQL logs via AWS CloudWatch Logs.
To let pgmetrics collect CloudWatch metrics, also pass the database instance ID of the database being monitored as the --aws-rds-dbid option to pgmetrics:
$ pgmetrics --aws-rds-dbid=mydbinstance-id {args} | pgdash {args}
pgmetrics supports standard methods of accessing the credentials required for calling the AWS APIs. As a best practice, AWS recommends that you specify credentials in the following order:
Use IAM roles for Amazon EC2 (if your application is running on an Amazon EC2 instance)
Use a shared credentials file (~/.aws/credentials)
Use environment variables (AWS_REGION, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and others)
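For example, on a host without an IAM role, you could supply the credentials via environment variables before invoking pgmetrics (all values below are placeholders):

```shell
# Placeholder credentials -- use your own, or prefer an EC2 IAM role instead.
export AWS_REGION="us-east-1"
export AWS_ACCESS_KEY_ID="YOUR-ACCESS-KEY-ID"
export AWS_SECRET_ACCESS_KEY="YOUR-SECRET-ACCESS-KEY"

# pgmetrics --aws-rds-dbid=mydbinstance-id ... will pick these up automatically.
```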
This AWS documentation page has more details.
More information about AWS support can be found in the Integrations page.
Finally, note that AWS will charge you for the API calls made by pgmetrics to collect this information. See the AWS CloudWatch pricing page for more details.