Using Archive Log Apply Delay on your standby database

Here's a quick introduction to an underutilized Standby™️ feature: the ability to delay the application of archive logs for a specified amount of time.

By Jan Klinke | August 21, 2020

Using Archive Log Apply Delay

In this blog, I would like to introduce you to an often unused functionality of Standby™️ - the ability to delay the application of archive logs for a specified amount of time, which can be useful to protect against human error.

Using Standby™️ gives you the unique ability to synchronize your standby database up to a specific SCN, and therefore timestamp. As a result, it is possible to have your standby database lag behind your primary for a specific, pre-determined number of minutes. This is not possible when using other backup or disaster recovery solutions, such as block-based replication.

 
1. Why Use Delay?

The obvious question is: why would you want to delay archive log apply on the standby database at all? Normally, a standby database protects your primary against a data center outage and other hardware failures, but it may not protect you against human error, such as someone accidentally dropping a critical table on production. For example, it is quite common for even experienced DBAs to forget the WHERE clause (or use an incorrect one) when deleting rows. An incorrectly specified Data Pump import can also silently delete existing data, and so on.

The problem with this type of data deletion or corruption is that, typically, the Dbvisit daemon synchronizes your standby database with the primary almost in real time - so any DML or DDL change, including such accidental deletions, is propagated to your standby database very quickly.

But when you set up and use an Archive Log Apply Delay, you will have time to react to such unexpected events. If someone drops a critical table on production, you will have plenty of time to prevent this change from being propagated to the standby database. Now, instead of a lengthy (and potentially tricky) process of extracting a single table, or a few rows, from an earlier full database backup, you would have immediate access to your data via the standby database.
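Conceptually, the delay is just a hold-back rule on log apply: a log is only eligible once it is old enough. Here is a minimal sketch of that rule in Python - my own illustration of the idea, not actual Standby™️ code:

```python
from datetime import datetime, timedelta

# Illustrative only - not Standby code. The value mirrors the DDC parameter.
APPLY_DELAY_LAG_MINUTES = 30  # example value; tune to your reaction time

def log_is_eligible(log_generated_at: datetime, now: datetime) -> bool:
    """An archive log is applied only once it is at least
    APPLY_DELAY_LAG_MINUTES old; younger logs are held back,
    giving you a window to react to mistakes on the primary."""
    return now - log_generated_at >= timedelta(minutes=APPLY_DELAY_LAG_MINUTES)
```

In other words, the standby never replays a change until the configured number of minutes has passed since that change was written to an archive log.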

 
2. How to Set Up an Apply Delay

Luckily, this is very easy to do. It is only necessary to change a single parameter in the DDC file on the Primary side, like so:

APPLY_DELAY_LAG_MINUTES = 5

If you are using an up-to-date Standby™️ version 9 release, there’s nothing else to do – the change will be propagated to both the primary and standby daemons upon the next log shipping operation. You can check that it has taken effect when you try to manually apply an archive log on the standby database:

===========================================================
Standby™️ Database Technology (9.0.16) (pid 121711)
dbvctl started on czlin0192: Mon Sep 14 13:31:21 2020
===========================================================

>>> Applying Log file(s) from czlin0191 to DBVONE on czlin0192:

    Taking into account APPLY_DELAY_LAG_MINUTES=5
    thread 1 sequence 1145 (1_1145_1047632729.arc)... done
    Last applied log(s):
    thread 1 sequence 1145

    Next SCN required for recovery 11806140 generated at 2020-09-14:13:27:11 +02:00.
    Next required log thread 1 sequence 1145

===========================================================
dbvctl ended on czlin0192: Mon Sep 14 13:31:39 2020
===========================================================

The key is to see “Taking into account APPLY_DELAY_LAG_MINUTES=5” in the output. In this example, I have set a delay of only five minutes, but in practice it makes more sense to set this to 30-60 minutes, as you’ll want enough time to react to any potential accident on the primary.
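When choosing a value, bear in mind that the effective lag of the standby is bounded below by the configured delay and above by the delay plus one log switch interval, since a change only ships to the standby once its log has been archived. A back-of-the-envelope helper (the function and parameter names are mine, purely illustrative):

```python
def standby_lag_window(delay_minutes: float, logswitch_minutes: float) -> tuple:
    """Return (min, max) minutes by which the standby trails the primary:
    at least the configured apply delay, and at most the delay plus one
    log switch interval (a change ships only after its log is archived)."""
    return (delay_minutes, delay_minutes + logswitch_minutes)
```

For example, with a 30-minute delay and log switches roughly every 15 minutes, the standby trails the primary by somewhere between 30 and 45 minutes.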

 
3. How to Use Delay

Once a delay is set, you are able to handle all those unexpected situations on the primary database when someone makes a mistake. Here is an example of just such a situation:

  1. I set APPLY_DELAY_LAG_MINUTES = 15 in my primary DDC, so when applying archive logs on the standby server my standby database will always be 15 minutes behind the current time:
=======================================================
Standby™️ Database Technology (9.0.19) (pid 121896)
dbvctl started on czlin0192: Mon Sep 14 13:44:13 2020
=======================================================

>>> Applying Log file(s) from czlin0191 to DBVONE on czlin0192:

    Taking into account APPLY_DELAY_LAG_MINUTES=15
    thread 1 sequence 1145 (1_1145_1047632729.arc)... done
    Last applied log(s):
    thread 1 sequence 1145

    Next SCN required for recovery 11806259 generated at 2020-09-14:13:29:24 +02:00.
    Next required log thread 1 sequence 1145

=======================================================
dbvctl ended on czlin0192: Mon Sep 14 13:44:23 2020
=======================================================

Note these two lines:

dbvctl started on czlin0192: Mon Sep 14 13:44:13 2020

....

Next SCN required for recovery 11806259 generated at 2020-09-14:13:29:24 +02:00.

As you can see, my standby database is roughly 15 minutes in the past (the “now” timestamp is not 100% exact, because we would also need to take the log switch frequency on the primary into account, but it is close enough for the purposes of this article).
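You can check this with simple timestamp arithmetic, using the two values from the dbvctl output above:

```python
from datetime import datetime

# Timestamps taken from the dbvctl output above
started = datetime(2020, 9, 14, 13, 44, 13)      # "dbvctl started" time
next_scn_at = datetime(2020, 9, 14, 13, 29, 24)  # "Next SCN ... generated at"

lag = started - next_scn_at
print(lag)  # 0:14:49 - just under the configured 15-minute delay
```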

  2. Now suppose our imaginary DBA makes a mistake on the primary database, and at 2020-09-14:13:46 he or she accidentally deletes all rows from the production table APPDATA.
  3. The table on the primary database is now empty, but the table APPDATA on the standby database is still intact. So we can use the standby table to restore data on the primary.
  4. First we need to stop the log apply on the standby database completely, to ensure that the deletion change from the primary database will not get replicated. This is best done by stopping the daemon.
  5. Next, we’ll open our standby database in read only mode:
    $ ./dbvctl -d DBVONE -o open

    ===================================================
    Standby™️ Database Technology (9.0.16) (pid 122010)
    dbvctl started on czlin0192: Mon Sep 14 13:55:05 2020
    ===================================================

    >>> Running pre-checks please wait... done

    Open Standby™️ instance DBVONE in READ ONLY mode...
    Standby™️ instance DBVONE on czlin0192 opened in READ ONLY mode.
    Log files cannot be applied to Database while in READ ONLY mode.
    Database tempfile(s) may need to be added to this database.

    ===================================================
    dbvctl ended on czlin0192: Mon Sep 14 13:55:09 2020
    ===================================================
  6. We need to get the data from the deleted table back onto our primary. This can be done easily by creating a database link from the primary to the standby database and inserting the missing data (you’ll need a working TNS alias from the primary to the standby database - in this case the TNS alias is called “STANDBY”):
      SQL> create database link HELPME connect to system identified by sys using 'STANDBY';

      Database link created.

      SQL> insert into APPDATA (select * from APPDATA@HELPME);

      512 rows created.
  7. And just like that, our data is back on the primary database. Archive log apply delay saved the day! Now we can re-enable the daemon on the standby and resume the log apply process as normal.

Conclusion

Setting an archive log apply delay for your standby database is extremely easy and, depending on the threats and expectations unique to your environment, can be a life-saver, providing significant protection against human error.
