Migrating Plausible Analytics to a New Server
A step-by-step guide to migrating Plausible Analytics running inside Docker to a new server
Plausible Analytics is a simple, open-source web analytics tool designed to provide essential website metrics without compromising on user privacy. Unlike traditional analytics platforms, Plausible focuses on simplicity and ease of use, offering clear insights without tracking personal data or using cookies.
At Visini AG, we self-host Plausible Analytics for ourselves and our clients. This guide explains how we migrated Plausible from a multi-service server with virtual machines (VMs) to a dedicated machine. The new machine hosts Plausible exclusively via Docker, allowing for easier maintenance and version upgrades. In both the old and the new setup, Plausible and the associated services (Postgres, ClickHouse) run with Docker Compose.
Create a Backup of Plausible Data
Backing up Postgres and ClickHouse data ensures we preserve customer site configurations and historical analytics, which can be restored on the new machine. Data stored in Postgres includes settings and metadata, such as the sites and their configurations. ClickHouse stores the actual analytics data, such as page views and unique visitors.
Postgres Backup
First, let’s create a dump of the Postgres database. We need to connect to the Postgres container to create the dump. Let’s check which databases exist in the Postgres container via the following command (replace the example container ID `a52ab8083b6b` with the one of your Postgres container):
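A minimal sketch, assuming the default `postgres` superuser from the official plausible/hosting Docker Compose setup:

```shell
# List all databases in the Postgres container
# (the "postgres" user is an assumption; adjust to your setup)
docker exec -it a52ab8083b6b psql -U postgres -c "\l"
```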
This should provide an output similar to the following:
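The exact listing depends on your setup; an illustrative output could look like:

```
     Name     |  Owner   | Encoding | ...
--------------+----------+----------+----
 plausible_db | postgres | UTF8
 postgres     | postgres | UTF8
 template0    | postgres | UTF8
 template1    | postgres | UTF8
```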
Looks like we need to create a dump of `plausible_db` in order to save the configuration data. Let’s create a dump of the entire database, like so:
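A sketch with `pg_dump`, again assuming the default `postgres` user:

```shell
# Dump the entire plausible_db database to a file on the VM
docker exec a52ab8083b6b pg_dump -U postgres plausible_db > plausible_db.sql
```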
Now let’s copy the dump from the virtual machine to the VM host, and then to the local machine:
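The hostnames below are placeholders for illustration:

```shell
# On the VM host: pull the dump out of the virtual machine
scp plausible-vm:plausible_db.sql .

# On the local machine: pull the dump from the VM host
scp vm-host:plausible_db.sql .
```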
After completing the Postgres backup, the next step is to back up ClickHouse, which stores the analytics data.
ClickHouse Backup
Next, we need to create a dump of the ClickHouse data. The relevant tables are `ingest_counters`, `sessions_v2`, and `events_v2`.
Note: In case you’ve imported data from Universal Analytics (UA), please also migrate the `imported_*` tables accordingly (not covered in the steps below). See a list of all tables via the following command (replace the example container ID `aed6425a6303` with the one of your ClickHouse container):
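Assuming the default `plausible_events_db` database name from the plausible/hosting setup:

```shell
# Open an interactive ClickHouse client session in the container
docker exec -it aed6425a6303 clickhouse-client --database plausible_events_db
```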
Running the command `show tables` should provide an output similar to the following:
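The exact set of tables varies by Plausible version; an illustrative listing:

```
events
events_v2
ingest_counters
schema_migrations
sessions
sessions_v2
```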
Let’s focus on the `ingest_counters`, `sessions_v2`, and `events_v2` tables. We can create dumps of these tables by running the following commands (replace the example container ID `aed6425a6303` with the one of your ClickHouse container):
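One way to do this is to export each table in ClickHouse’s Native format (the `plausible_events_db` database name is an assumption based on the default setup):

```shell
# Export each relevant table in Native format to files on the VM
docker exec aed6425a6303 clickhouse-client \
  --query "SELECT * FROM plausible_events_db.ingest_counters FORMAT Native" > ingest_counters.native
docker exec aed6425a6303 clickhouse-client \
  --query "SELECT * FROM plausible_events_db.sessions_v2 FORMAT Native" > sessions_v2.native
docker exec aed6425a6303 clickhouse-client \
  --query "SELECT * FROM plausible_events_db.events_v2 FORMAT Native" > events_v2.native
```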
Now let’s copy these files from the virtual machine to the VM host, and then to the local machine:
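As before, the hostnames are placeholders:

```shell
# On the VM host, then on the local machine
scp 'plausible-vm:*.native' .
scp 'vm-host:*.native' .
```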
With the backups of the Postgres and ClickHouse data complete, we can now proceed to import them into the new server.
Import Backup Data into New Server
Now that we have the dumps of the Postgres and ClickHouse data, we can import them into the new machine, which should be running Plausible Analytics inside Docker. We first import the Postgres data, followed by the ClickHouse data.
Import Postgres Backup
First, let’s copy the dump of the Postgres database to the new server:
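For example (the hostname is a placeholder):

```shell
scp plausible_db.sql new-server:
```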
Then we can import the dump into the Postgres container. Stop the Plausible container first to avoid conflicts, then drop the existing database and create a new one for a fresh start. Use the following commands (replace the example container ID `5ab0dabcbaa4` with the one of your Postgres container, and `1455b8caae1c` with the one of your Plausible container):
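A sketch of the import, assuming the default `postgres` user:

```shell
# Stop Plausible so nothing writes to Postgres during the import
docker stop 1455b8caae1c

# Drop the existing database and recreate it for a fresh start
docker exec 5ab0dabcbaa4 dropdb -U postgres plausible_db
docker exec 5ab0dabcbaa4 createdb -U postgres plausible_db

# Import the dump, then start Plausible again
docker exec -i 5ab0dabcbaa4 psql -U postgres -d plausible_db < plausible_db.sql
docker start 1455b8caae1c
```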
Now the Postgres data has been imported into the new server. We can proceed to import the ClickHouse data.
Import ClickHouse Backup
Next, we need to import the ClickHouse data. We can copy the ClickHouse dumps to the new server:
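For example (the hostname is a placeholder):

```shell
scp ingest_counters.native sessions_v2.native events_v2.native new-server:
```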
Then we can import the dumps into the ClickHouse container. We truncate the existing tables and then import the data from the dumps. Use the following commands (replace the example container ID `e92e926fb935` with the one of your ClickHouse container):
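A sketch, assuming the Native-format dumps from above and the default `plausible_events_db` database:

```shell
# For each table: truncate, then load the Native dump
for table in ingest_counters sessions_v2 events_v2; do
  docker exec e92e926fb935 clickhouse-client \
    --query "TRUNCATE TABLE plausible_events_db.${table}"
  docker exec -i e92e926fb935 clickhouse-client \
    --query "INSERT INTO plausible_events_db.${table} FORMAT Native" < "${table}.native"
done
```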
Now the ClickHouse data has been imported into the new server. We have successfully migrated Plausible Analytics to the new server.
Disable ClickHouse-internal Logging
In order to keep the ClickHouse-internal data (such as the `trace_log` and `metric_log` tables) from growing indefinitely, we can disable unnecessary logging.
Mount the following configuration file (`clickhouse-config.xml`) to the ClickHouse container:
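A configuration along the lines of the example in the plausible/hosting repository, which disables the internal log tables:

```xml
<clickhouse>
    <logger>
        <level>warning</level>
        <console>true</console>
    </logger>

    <!-- Stop the unnecessary internal logging -->
    <query_thread_log remove="remove"/>
    <query_log remove="remove"/>
    <text_log remove="remove"/>
    <trace_log remove="remove"/>
    <metric_log remove="remove"/>
    <asynchronous_metric_log remove="remove"/>
    <session_log remove="remove"/>
    <part_log remove="remove"/>
</clickhouse>
```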
We can additionally disable logging of queries and query threads via `clickhouse-user-config.xml`, which should also be mounted to the ClickHouse container:
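Following the same plausible/hosting example:

```xml
<clickhouse>
    <profiles>
        <default>
            <log_queries>0</log_queries>
            <log_query_threads>0</log_query_threads>
        </default>
    </profiles>
</clickhouse>
```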
Clean up ClickHouse Data (Optional)
ClickHouse-internal data may also be cleaned up manually. This can be necessary periodically, especially if logging is not disabled as described above and the data grows to an unmanageable size (which happened in our case on the old server). We can connect to the container and run `clickhouse-client` to check the size of all tables, via the following command:
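For example (replace the example container ID with the one of your ClickHouse container):

```shell
# Open an interactive ClickHouse client session
docker exec -it e92e926fb935 clickhouse-client
```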
Then we can run this query to get the size of all tables:
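One way to get the on-disk size per table is to aggregate over `system.parts`:

```sql
SELECT
    table,
    formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active
GROUP BY table
ORDER BY sum(bytes_on_disk) DESC;
```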
This should provide an output similar to the following:
Wow! Almost 10GB of logs accumulated over the past 12 months. We can delete data from these tables by running the following queries:
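For example, by truncating the system log tables (add or remove tables to match the sizes in your output):

```sql
TRUNCATE TABLE system.trace_log;
TRUNCATE TABLE system.metric_log;
TRUNCATE TABLE system.asynchronous_metric_log;
TRUNCATE TABLE system.query_log;
TRUNCATE TABLE system.query_thread_log;
```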
Conclusion
In this guide, we’ve covered the process of migrating Plausible Analytics to a new server. We created dumps of the Postgres and ClickHouse data, and then imported them into the new server. We also disabled ClickHouse-internal logging and cleaned up old data to ensure the system runs smoothly and doesn’t accumulate unnecessary data.
These steps will help you migrate Plausible Analytics seamlessly to a new server, ensuring no loss of configurations or historical data.