The alternatives to data streaming that we will be covering in this blog are:
- WAL File Archiving
- WAL File Shipping
When and why you should use a data streaming alternative
A real-time data stream may not be a practical DR solution for a number of reasons, chief among them network instability and increased environment interdependency.
Establishing a data stream via an unstable network can be a very frustrating experience, with frequent line drop-outs and reconnects causing unpredictable spikes in data delay to the standby cluster.
If a data stream is configured to run in Synchronous Mode, a write commit on the primary cluster must wait until the transaction has been transmitted to the standby and written to a WAL log there, thereby becoming dependent on the standby environment’s performance. If the standby server starts running slowly, the primary will also slow down while it waits for the standby to catch up – in effect, you have created a negative interdependency between your primary and standby environments.
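To make this interdependency concrete, here is a minimal sketch of the standard PostgreSQL settings that put a primary into synchronous mode (the name 'standby1' is a placeholder for whatever application_name your standby connects with):

```bash
# Standard PostgreSQL synchronous replication settings, applied from a shell.
# With these in place, every commit on the primary waits for 'standby1'.
psql -c "ALTER SYSTEM SET synchronous_commit = on;"
psql -c "ALTER SYSTEM SET synchronous_standby_names = 'standby1';"
psql -c "SELECT pg_reload_conf();"
```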
In either of the cases above, as well as others, a better choice would be to use one of the alternative DR modes described below.
Comparing WAL Archiving and WAL Shipping
Both methods have their benefits. The WAL Archiving method uses PostgreSQL's internal mechanisms to copy the WAL files from primary to standby and apply them on the standby, keeping the DR in sync. The WAL Shipping method, on the other hand, does not use any internal Postgres parameters; instead, Dbvisit does all the heavy lifting, transferring the WAL files from primary to standby, applying them, and ensuring the DR cluster is synced with the production cluster at regular intervals.
Let's look at how the WAL Archiving and WAL Shipping methods handle some key functions:

| Function | WAL Archiving | WAL Shipping |
| --- | --- | --- |
| DR method | Internal to Postgres; involves changing parameters in postgresql.conf | Independent of Postgres |
| Does DR creation involve downtime on the production cluster? | Yes, as parameter values are changed in postgresql.conf | No; no config file changes are made that would require restarting the production cluster |
| Scheduling | Depends on the value set for archive_timeout when creating the DR | Automated standby update enabled with a set interval when creating the DR |
| Sync method | WAL files are copied directly from the pg_wal directory to the standby | Files are created under the custom backup directory on the primary and copied to the backup directory on the standby |
| Parameters changed for the sync process | archive_command, restore_command and archive_cleanup_command | N/A |
| Backup method | pg_basebackup is used | pg_basebackup is used |
| Replication slot | N/A | A replication slot is created on the primary cluster to stream the files to the backup directory on the primary, from where they are transferred to the standby backup directory |
| Scheduling interval | Cannot be changed once set without restarting the primary | Can be changed at any time |
Now, let's look at how to create the DR cluster using Dbvisit for both WAL Archiving and Shipping methods.
Overview of setting up Disaster Recovery
WAL Archiving Setup
The standby cluster’s data and configuration directories can differ from the production cluster’s, but it is very important to ensure that these directories are created with the necessary permissions before moving forward. In particular, make sure the tablespace directories are created on the standby with the standard permissions (drwx------).
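For example, a minimal sketch of preparing those directories up front (all paths here are placeholders for your own layout):

```bash
# Pre-create the standby's data and tablespace directories with the
# permissions PostgreSQL insists on (0700, i.e. drwx------).
mkdir -p /pgdata/standby/data /pgdata/standby/tblspc_app
chown -R postgres:postgres /pgdata/standby
chmod 700 /pgdata/standby/data /pgdata/standby/tblspc_app
```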
The full base backup location provided should have enough free space to accommodate the backup files. This method backs up the following files (an illustrative backup command follows the list):
- base.tar: PostgreSQL base files, including settings, etc.
- pg_wal.tar: WAL data files for the standby to restore.
- tblspc_xxxxx.tar: custom tablespace files.
- backup_manifest: backup metadata, used by pg_verifybackup to verify the backup.
- We also back up postgresql.conf, pg_ident.conf and pg_hba.conf.
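For reference, the backup itself is the kind produced by a tar-format pg_basebackup run along these lines (Dbvisit runs the real backup for you; the host, user and directory below are placeholder assumptions):

```bash
# Tar-format base backup: -Ft produces base.tar (plus one tblspc tar per
# tablespace), -X stream captures WAL into pg_wal.tar, and PostgreSQL 13+
# also writes the backup_manifest used by pg_verifybackup.
pg_basebackup -h primary.example.com -U replicator \
  -D /backup/full -Ft -X stream -P
```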
The archive_timeout parameter determines the interval at which WAL files are generated under the pg_wal directory and, consequently, how often they are transferred from primary to standby. The default value is 60 seconds.
After this, the files are transferred to the standby and untarred into their locations, and the standby cluster is started in read-only mode.
The most important point to remember is that the “primary cluster will be restarted” irrespective of whether the archive mode is on or off. This is to ensure that the archive_command is updated with a value that is specific to Dbvisit.
The archive_command parameter below shows an example of a command that Dbvisit StandbyMP will write. You do not need to create this command yourself; Dbvisit does it all for you.
This command picks up the WAL file from the source and transfers it to the target directory on the standby server. The dbvpgarchive executable is specific to Dbvisit and ensures that the files are transferred to the standby.
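To give a feel for the shape of it, here is an illustrative sketch only: the dbvpgarchive path and arguments below are placeholder assumptions, not its real syntax, and Dbvisit writes the actual command for you. %p and %f are standard PostgreSQL substitutions for the WAL file's path and file name:

```bash
# Append illustrative archiving settings to the primary's postgresql.conf.
# The archive_timeout value matches the 60-second default mentioned above.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
archive_mode = on
archive_timeout = 60
archive_command = '/opt/dbvisit/standbymp/bin/dbvpgarchive %p %f'
EOF
```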
There are two more important commands in addition to the archive_command. The restore_command will ensure that the WAL files are restored from the standby’s backup directory to bring the standby in sync with production.
Like the archive executable, the dbvpgrestore executable is unique to Dbvisit and is used for restoring the WAL files on the standby.
The archive_cleanup_command is mainly used to clean up the WAL files that are no longer required by PostgreSQL.
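Again, purely as an illustration (the dbvpgrestore path and arguments are placeholder assumptions; %f, %p and %r are standard PostgreSQL substitutions, and pg_archivecleanup is the stock PostgreSQL utility commonly used for this kind of cleanup):

```bash
# Append illustrative recovery settings to the standby's postgresql.conf.
cat >> "$PGDATA/postgresql.conf" <<'EOF'
restore_command = '/opt/dbvisit/standbymp/bin/dbvpgrestore %f %p'
archive_cleanup_command = 'pg_archivecleanup /backup/wal %r'
EOF
```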
Once the DR is created, the dashboard shows the following status:
Any data changes happening in production will be clearly reflected in the sync status on the dashboard. You can also verify this directly from psql.
From the primary:
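For example, using the standard statistics views (nothing Dbvisit-specific here):

```bash
# Current write position and the archiver's progress on the primary.
psql -c "SELECT pg_current_wal_lsn();"
psql -c "SELECT archived_count, last_archived_wal, last_archived_time FROM pg_stat_archiver;"
```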
From the standby, you can tail the PostgreSQL log file to check the restoration:
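For example (the log path is a placeholder; adjust it to your log_directory setting):

```bash
# Each restored WAL segment is logged as it is applied on the standby.
tail -f /var/lib/pgsql/data/log/postgresql.log | grep 'restored log file'
# You can also compare the replay position with the primary's LSN:
psql -c "SELECT pg_last_wal_replay_lsn();"
```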
The advantages of using Dbvisit to set up DR with the WAL Archiving method include:
- All the above steps are performed by Dbvisit.
- The main challenge is ensuring that the archive and restore commands are set up without errors; Dbvisit takes care of this and avoids restarting the primary cluster multiple times.
- Cleanup of the WAL files after they are applied on the standby happens automatically.
- No additional configuration changes are required to monitor or manage the DR.
WAL Shipping Setup
This method is the simplest way to create DR for PostgreSQL using Dbvisit. There are no parameter changes required on the primary cluster nor do we need to restart the cluster for the changes to take effect. Dbvisit syncs the DR with the primary at regular pre-set intervals, which can easily be changed at any time.
The base backup taken is similar to the WAL Archiving method, with one change: a replication slot is created on the primary to keep track of the backup files. It also establishes a local connection to write the WAL records to the custom backup location, from where the files are transferred from primary to standby.
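Dbvisit handles this internally, but conceptually it is similar to pairing a physical replication slot with PostgreSQL's stock pg_receivewal tool (the slot name and directory here are placeholder assumptions):

```bash
# Create a physical replication slot on the primary, then stream WAL
# records into a local backup directory via that slot. pg_receivewal
# keeps running, writing segments as they are produced.
psql -c "SELECT pg_create_physical_replication_slot('dbv_backup_slot');"
pg_receivewal --slot=dbv_backup_slot -D /backup/wal
```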
The next important step is to decide the sync interval: how often the WAL files are created on the primary, transferred to the standby and restored there to bring the standby in sync with the primary. The default interval is 300 seconds but can be modified at any time.
As you can see, the setup process is straightforward. It's easy to manage a WAL Shipping environment without having to make any changes to the PostgreSQL configuration. The key benefits are:
- Dbvisit takes care of the entire setup process without changing any Postgres parameters on the primary.
- The replication slot helps with streaming the WAL records to the backup directory on the primary.
- Schedules can be changed easily with automated standby updates.
Concluding comments:
With Dbvisit StandbyMP it's simple to set up either WAL Archiving or WAL Shipping environments. What’s more, all of these environments can be monitored and managed from a single pane of glass with the same commands, simplifying life for DBAs!
Dbvisit StandbyMP provides the following benefits, no matter which DR sync method you choose:
- The steps are straightforward, guided and fast.
- Very few prerequisites for installation.
- Easier and faster management of multiple environments.
- Better DR status visibility, with all configurations checkable from the central dashboard.
- Better DR status visibility with daily heartbeat notifications.
- Immediate notifications of any issues with your sync status, directly from the GUI, Slack and email.
- No manual intervention or monitoring is required.