# Automation

You can send pgmetrics reports to pgDash regularly, to let pgDash extract and store metrics from those reports.

Typically, you’ll collect metrics from all important/interesting/related databases on one server in one go, storing them under one name:

```
pgmetrics {options..} {dbs..} | pgdash -a APIKEY report prod-23
```

#### **Avoiding Password Prompt**

Running in an automated manner naturally requires avoiding the password prompt that pgmetrics brings up by default. You can use the `--no-password` option of pgmetrics to suppress this prompt, then set up an alternate way to authenticate. Using a [.pgpass file](https://www.postgresql.org/docs/current/static/libpq-pgpass.html) to supply the password and/or configuring your [pg\_hba.conf](https://www.postgresql.org/docs/current/static/auth-pg-hba-conf.html) file not to require a password for a particular host/user/database combination are common options.
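As a sketch, a `.pgpass` entry has the form `hostname:port:database:username:password`; the host, user, and password below are hypothetical placeholders. The file must be readable only by its owner (`chmod 0600 ~/.pgpass`), or libpq will ignore it:

```
# hostname:port:database:username:password
db1.example.com:5432:*:postgres:secretpassword
```

A `*` in the database field matches any database, which is convenient when pgmetrics collects from several databases in one go.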

For more information see the [pgmetrics invocation options](https://pgmetrics.io/docs/invoke.html) and the [client authentication chapter](https://www.postgresql.org/docs/current/static/client-authentication.html) in the Postgres docs.

#### **Collecting and Reporting Metrics Periodically**

How often you collect and report metrics depends on your database activity; we recommend a frequency of around 5 minutes. The pgDash API is rate limited: you must wait a minimum of 60 seconds before reporting again. The pgdash CLI will report an error code 429 if your request was rejected because of the rate limit.

You can run the command above (“pgmetrics | pgdash”) as a cron job, or use a simple script:

```
#!/bin/sh
# Collect metrics and report them to pgDash every 5 minutes.
while true
do
    pgmetrics {options..} {dbs..} | pgdash -a APIKEY report NAME
    sleep 300
done
```
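The equivalent cron entry would run the same pipeline every 5 minutes. This is a sketch: it assumes pgmetrics and pgdash are on cron’s `PATH`, and `APIKEY` and `NAME` are placeholders for your actual API key and server name:

```
*/5 * * * * pgmetrics {options..} {dbs..} | pgdash -a APIKEY report NAME
```

Because cron starts each run on schedule regardless of how long the previous one took, there is no need for the `sleep` used in the script above.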

If you're using a cron job with an interval of one minute, remember that the job may take a few seconds to complete. This can cause the next invocation to report before 60 seconds have elapsed since the previous report, triggering the rate limit.
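One way to keep short-interval cron runs from overlapping is to wrap the pipeline in `flock` (from util-linux), which skips a run if the previous one is still holding the lock. The lock-file path here is an arbitrary choice, and `APIKEY` and `NAME` are placeholders as before:

```
*/1 * * * * flock -n /tmp/pgdash.lock -c 'pgmetrics {options..} {dbs..} | pgdash -a APIKEY report NAME'
```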
