Over the last few weeks, I have been working with an Oracle 11g RAC primary environment that uses a single instance environment as a standby. This is a great way to have a backup environment for your primary site: when disaster strikes, you have a standby database in place to activate, and in most cases this will be sufficient. The DBA fails over to the standby environment and then starts working on resolving the issue with the primary site, so that you can switch back to the primary as soon as possible. But what if your standby environment is a RAC-capable environment? You can convert the single instance standby to a RAC-enabled database and get better use of the environment.
Let's look at an example of where you would be able to do this: a primary database environment running a 2-node Oracle 11g RAC setup, with a secondary site running a 2-node RAC setup (same software versions as the primary) used for Disaster Recovery (DR). The standby site uses only one instance to perform recovery, which is why most DBAs would simply set up a single instance standby database environment. The end result, however, is a powerful DR site that is not utilized to its full potential.
Another point to note is that while this DR site is running as a standby database environment, it may be idle most of the time. To better utilize this DR site, both from a resource point of view and in terms of licensing costs, why not offload your test or QA environment here? That way you can perform testing in an environment that is similar to production, with the added benefit that you are making better use of your hardware and licensing, which can represent a huge cost saving. If disaster strikes, you shut down the test/QA environment and activate the standby database as a RAC-enabled database, so that the standby system can utilize all the required resources.
In this post, I will show you how to manually update the single instance standby database to a RAC-enabled database following activation. There are a few prerequisites, the main one being that you require a second RAC environment (2 nodes with shared storage) to be used for the standby database. I am assuming that your standby environment consists of a 2-node cluster with shared storage, and that the required Oracle software is already installed. At a later stage I will post on how to install the RAC components, but here I will focus on converting the single instance standby database to a full-blown RAC database, post-activation. For details on Oracle RAC installation, please see the Oracle documentation.
In my test lab I am using the following:
Primary Environment (2 Nodes with shared storage)
- Oracle Linux 5.7 64-bit installed on two nodes, with node 1 as dbvrlin301 and node 2 as dbvrlin302
- Oracle Clusterware 11.2.0.3 64-bit
- Oracle Database 11.2.0.3 64-bit (Standard Edition)
- RAC-enabled primary database called TSTRAC with two instances, TSTRAC1 (node 1) and TSTRAC2 (node 2)
- Latest version of Dbvisit Standby software
Standby Environment (2 Nodes with shared storage)
- Oracle Linux 5.7 64-bit installed on two nodes, with node 1 as dbvrlin303 and node 2 as dbvrlin304
- Oracle Clusterware 11.2.0.3 64-bit
- Oracle Database 11.2.0.3 64-bit (Standard Edition)
- Single instance standby database called TSTRAC running on dbvrlin303
- Latest version of Dbvisit Standby software
The installed components and configurations are identical on both systems. I am also running multiple RAC databases on these systems, but as mentioned, I will focus on the TSTRAC database. Please note that I am using Oracle Standard Edition with Dbvisit Standby in this environment.
The standby database was created using the Dbvisit Standby software, and the standby environment is configured the same as the primary database environment. Two disk groups are used: +DATA (used for data files, control files and a set of online redo logs) and +FRA (used for control files, archive logs and a set of online redo logs).
The spfile parameters used on the primary RAC database are set as follows:
*.archive_lag_target=3600
*.audit_file_dest='/u01/app/oracle/admin/TSTRAC/adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+DATA/tstrac/controlfile/current.260.785745995','+FRA/tstrac/controlfile/current.256.785745995'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_create_online_log_dest_2='+FRA'
*.db_domain='dbvisit.co.nz'
*.db_name='TSTRAC'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=6291456000
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TSTRACXDB)'
TSTRAC1.instance_number=1
TSTRAC2.instance_number=2
*.log_archive_format='%t_%s_%r.dbf'
*.log_checkpoints_to_alert=FALSE
*.memory_target=1073741824
*.nls_territory='NEW ZEALAND'
*.open_cursors=300
*.processes=150
*.remote_listener='dbvrlin-scan.dbvisit.co.nz:1521'
*.remote_login_passwordfile='exclusive'
TSTRAC2.thread=2
TSTRAC1.thread=1
TSTRAC1.undo_tablespace='UNDOTBS1'
TSTRAC2.undo_tablespace='UNDOTBS2'
The parameters for the standby database are listed below:
*.audit_file_dest='/u01/app/oracle/admin/TSTRAC/adump'
*.compatible='11.2.0.0.0'
*.control_files='+DATA/tstrac/controlfile/current.269.792118423','+FRA/tstrac/controlfile/current.532.792118425'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_create_online_log_dest_2='+FRA'
*.db_domain='dbvisit.co.nz'
*.db_name='TSTRAC'
*.db_recovery_file_dest_size=6291456000
*.db_recovery_file_dest='+FRA'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=TSTRACXDB)'
*.log_archive_format='%t_%s_%r.dbf'
*.log_checkpoints_to_alert=FALSE
*.memory_target=1073741824
*.nls_territory='NEW ZEALAND'
*.open_cursors=300
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.undo_tablespace='UNDOTBS1'
So now that you know more about the environment, I will jump straight into it. The first step is to activate the standby database. In this case I am simulating a primary environment failure, so I simply activate the standby database using Dbvisit Standby. Once this is done, I will show you the steps to convert this database into a RAC database with the required components registered with the clusterware.
The steps below show how you can activate the standby database using the Dbvisit dbv_oraStartStop utility. This step can also be performed via the GUI.
These commands are executed on the Standby environment:
oracle@dbvrlin303[/usr/local/dbvisit/standby]: ./dbv_oraStartStop activate TSTRAC1
=============================================================
Dbvisit Standby Database Technology (6.0.36.9642) (pid 26292)
dbv_oraStartStop started on dbvrlin303: Wed Sep 12 16:31:17 2012 ()
=============================================================
Activating means this database will become a Primary Database.
It will no longer be a Standby Database for TSTRAC on dbvrlin301.
Activation cannot be reversed.
=>Activate Standby Database on dbvrlin303? <Yes/No> [No]: Yes
Are you sure? <Yes/No> [No]: Yes
Activating now...
Activate Standby Database TSTRAC...
Standby Database TSTRAC activated.
Shutting down standby Database TSTRAC...
Standby Database TSTRAC shutdown successfully.
Starting Activated Standby Database TSTRAC...
Activated Standby Database TSTRAC started .
File dbv_TSTRAC1.env copied to dbv_TSTRAC1.env.201209121631.
Dbvisit Database configuration (DDC) file dbv_TSTRAC1.env has been updated and variables have been reversed between primary and standby server.
SOURCE=dbvrlin303 DESTINATION=dbvrlin301.
Activation complete. Please ensure a backup of this Database is made.
Old archives from before the activation should be removed to avoid mix-up between new and old archive logs.
If the Dbvisit Standby process is to be reversed, then
Database on dbvrlin301 will need to be rebuilt as a Standby Database.
=============================================================
dbv_oraStartStop ended on dbvrlin303: Wed Sep 12 16:32:12 2012
=============================================================
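As the activation output notes, a backup of the newly activated database should be taken as soon as possible. A minimal sketch using RMAN (adjust to your own backup strategy):
oracle@dbvrlin303[/home/oracle]: rman target /
RMAN> backup database plus archivelog;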
Below are some queries I ran on the newly activated standby database, which is now seen as a primary database.
SQL> select name, controlfile_type, open_mode, database_role from v$database;
NAME CONTROL OPEN_MODE DATABASE_ROLE
--------------- ------- -------------------- ----------------
TSTRAC CURRENT READ WRITE PRIMARY
SQL> select thread#, status, enabled, instance, sequence# from v$thread;
THREAD# STATUS ENABLED INSTANCE SEQUENCE#
---------- ------ -------- --------------- ----------
1 OPEN PUBLIC TSTRAC 1
2 CLOSED PUBLIC TSTRAC2 1
SQL> select tablespace_name, status, contents from dba_tablespaces order by 1;
TABLESPACE_NAME STATUS CONTENTS
------------------------------ --------- ---------
SYSAUX ONLINE PERMANENT
SYSTEM ONLINE PERMANENT
TEMP ONLINE TEMPORARY
UNDOTBS1 ONLINE UNDO
UNDOTBS2 ONLINE UNDO
USERS ONLINE PERMANENT
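Because this standby was built from a RAC primary, the second redo thread and the UNDOTBS2 undo tablespace already exist in the database, as the output above shows. If you want to double-check that thread 2 has its own online redo log groups, a quick illustrative query is:
SQL> select group#, thread#, bytes/1024/1024 as mb, status from v$log order by thread#, group#;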
Once the standby database is activated, it is open for full read/write operations. We now get to the steps required to change the database into a RAC-enabled database. There are a few prerequisite steps to perform first. As you will have noticed, only one node in the standby environment was configured for the database, so some configuration is required on the second node in preparation for it becoming the second node of the RAC-enabled database. As both nodes in this configuration are already part of a cluster, the next steps are easy.
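Before making any changes, it is worth confirming that both standby nodes are members of the same cluster. A quick check, assuming the Grid Infrastructure tools are in your PATH, is the olsnodes command (with -n it also lists the node numbers):
oracle@dbvrlin303[/home/oracle]: olsnodes -n
Both dbvrlin303 and dbvrlin304 should appear in the listing.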
A few basic steps to be performed are described below:
Update /etc/oratab. This should already be done on the first node, but it is always good to double-check and add the entry if it is not there. Add the required entry on each node; in my environment I have added the following:
Node 1 (dbvrlin303) - /etc/oratab
TSTRAC1:/u01/app/oracle/product/11.2.0/dbhome_1:N
Node 2 (dbvrlin304) - /etc/oratab
TSTRAC2:/u01/app/oracle/product/11.2.0/dbhome_1:N
Next, ensure a password file exists for each instance in $ORACLE_HOME/dbs on both nodes:
Node 1 (dbvrlin303):
oracle@dbvrlin303[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: ls -l orapwTSTRAC1
-rw-r----- 1 oracle oinstall 1536 Sep 12 16:27 orapwTSTRAC1
Node 2 (dbvrlin304):
oracle@dbvrlin304[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: ls -al orapwTSTRAC2
-rw-r----- 1 oracle oinstall 1536 Sep 12 16:27 orapwTSTRAC2
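If the password file is missing on the second node, one option is to copy it across from the first node and rename it to match the second instance. A minimal sketch, assuming passwordless SSH is configured between the nodes:
oracle@dbvrlin303[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: scp orapwTSTRAC1 dbvrlin304:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwTSTRAC2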
Then make sure each node has a minimal init<SID>.ora that points to the shared spfile on ASM:
Node 1 (dbvrlin303):
oracle@dbvrlin303[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: ls -al initTSTRAC1.ora
-rw-r--r-- 1 oracle oinstall 39 Sep 12 16:26 initTSTRAC1.ora
oracle@dbvrlin303[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: cat initTSTRAC1.ora
SPFILE='+DATA/TSTRAC/spfileTSTRAC.ora'
Node 2 (dbvrlin304):
oracle@dbvrlin304[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: ls -al initTSTRAC2.ora
-rw-r--r-- 1 oracle oinstall 39 Sep 12 16:26 initTSTRAC2.ora
oracle@dbvrlin304[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: cat initTSTRAC2.ora
SPFILE='+DATA/TSTRAC/spfileTSTRAC.ora'
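If the pointer file does not yet exist on the second node, it can be created in one line (a sketch; adjust the spfile path to match your environment):
oracle@dbvrlin304[/u01/app/oracle/product/11.2.0/dbhome_1/dbs]: echo "SPFILE='+DATA/TSTRAC/spfileTSTRAC.ora'" > initTSTRAC2.ora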
Once I have made the changes above, I can start looking at the database parameters. As mentioned, I am using a shared spfile that is already located on ASM shared storage. To enable this database as a RAC database with two instances, I need to update and set a few parameters. These parameters are listed below, although I do recommend you review the Oracle documentation on each of them.
Note: The environment I am using is 11g (11.2.0.3); other versions might require different changes.
Some of these parameters need to be set for specific instances, so after updating them I should end up with the following:
*.cluster_database=true
TSTRAC1.instance_number=1
TSTRAC2.instance_number=2
*.remote_listener='dbvrlindr-scan.dbvisit.co.nz:1521'
TSTRAC2.thread=2
TSTRAC1.thread=1
TSTRAC1.undo_tablespace='UNDOTBS1'
TSTRAC2.undo_tablespace='UNDOTBS2'
To set these parameters, I ran the following from node 1 (dbvrlin303):
alter system set cluster_database=true scope=spfile sid='*';
alter system set instance_number=1 scope=spfile sid='TSTRAC1';
alter system set instance_number=2 scope=spfile sid='TSTRAC2';
alter system set remote_listener='dbvrlindr-scan.dbvisit.co.nz:1521' scope=spfile sid='*';
alter system set thread=1 scope=spfile sid='TSTRAC1';
alter system set thread=2 scope=spfile sid='TSTRAC2';
alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='TSTRAC1';
alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='TSTRAC2';
After running the above commands, shut down and then restart the database instances manually on node 1 (dbvrlin303) and node 2 (dbvrlin304). The example steps that were followed can be seen below:
Node 1 (dbvrlin303)
SQL> alter system set cluster_database=true scope=spfile sid='*';
System altered.
SQL> alter system set instance_number=1 scope=spfile sid='TSTRAC1';
System altered.
SQL> alter system set instance_number=2 scope=spfile sid='TSTRAC2';
System altered.
SQL> alter system set remote_listener='dbvrlindr-scan.dbvisit.co.nz:1521' scope=spfile sid='*';
System altered.
SQL> alter system set thread=1 scope=spfile sid='TSTRAC1';
System altered.
SQL> alter system set thread=2 scope=spfile sid='TSTRAC2';
System altered.
SQL> alter system set undo_tablespace='UNDOTBS1' scope=spfile sid='TSTRAC1';
System altered.
SQL> alter system set undo_tablespace='UNDOTBS2' scope=spfile sid='TSTRAC2';
System altered.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> exit
Disconnected from Oracle Database 11g Release 11.2.0.3.0 - 64bit Production
With the Real Application Clusters and Automatic Storage Management options
oracle@dbvrlin303[/home/oracle]: . oraenv
ORACLE_SID = [TSTRAC] ? TSTRAC1
The Oracle base remains unchanged with value /u01/app/oracle
oracle@dbvrlin303[/home/oracle]: sqlplus /nolog
SQL*Plus: Release 11.2.0.3.0 Production on Wed Sep 12 16:42:27 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size 2235208 bytes
Variable Size 679478456 bytes
Database Buffers 381681664 bytes
Redo Buffers 5541888 bytes
Database mounted.
Database opened.
Node 2 (dbvrlin304)
oracle@dbvrlin304[/home/oracle]: . oraenv
ORACLE_SID = [TSTRAC2] ? TSTRAC2
The Oracle base remains unchanged with value /u01/app/oracle
oracle@dbvrlin304[/home/oracle]: sqlplus /nolog
SQL*Plus: Release 11.2.0.3.0 Production on Wed Sep 12 16:43:00 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
SQL> connect / as sysdba
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1068937216 bytes
Fixed Size 2235208 bytes
Variable Size 679478456 bytes
Database Buffers 381681664 bytes
Redo Buffers 5541888 bytes
Database mounted.
Database opened.
With both instances up, I ran the following queries to check the current status. These can be executed from either node; the output below is from node 1 (dbvrlin303):
SQL> select name, controlfile_type, open_mode, database_role from v$database;
NAME CONTROL OPEN_MODE DATABASE_ROLE
--------------- ------- -------------------- ----------------
TSTRAC CURRENT READ WRITE PRIMARY
SQL> select instance_number, instance_name, host_name, status, thread# from gv$instance;
INSTANCE INSTANCE
NUMBER NAME HOST NAME STATUS THREAD#
-------- --------- ------------------------- ------------ ----------
1 TSTRAC1 dbvrlin303.dbvisit.co.nz OPEN 1
2 TSTRAC2 dbvrlin304.dbvisit.co.nz OPEN 2
SQL> select thread#, status, enabled, instance, sequence# from v$thread;
THREAD# STATUS ENABLED INSTANCE SEQUENCE#
---------- ------ -------- --------------- ----------
1 OPEN PUBLIC TSTRAC1 1
2 OPEN PUBLIC TSTRAC2 2
You will now have two running instances, one on each node: your single instance database is now a RAC database. But it does not end here. We still need to register the database and instances with the Oracle Cluster Registry (OCR). Once you do this, you can use the srvctl command to stop and start your RAC database and instances. Two commands are required: one to add the database, and another (executed twice) to add the two instances.
The commands used to register the new database and instances are as follows:
srvctl add database -d <db_unique_name> -o <oracle_home> -c RAC -a "<diskgroup_list>" -p <spfile>
srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name>
Please note that the examples above do not show all the options. For more details on these commands, see the Oracle documentation, or simply run "srvctl add database -h" or "srvctl add instance -h" for a full listing. The next step is to execute these commands. Below is an example of running them, as well as reviewing the newly added configuration.
oracle@dbvrlin303[/home/oracle]: srvctl add database -d TSTRAC -o /u01/app/oracle/product/11.2.0/dbhome_1 -c RAC -a "DATA,FRA" -p +DATA/tstrac/spfiletstrac.ora
oracle@dbvrlin303[/home/oracle]: srvctl add instance -d TSTRAC -i TSTRAC1 -n dbvrlin303
oracle@dbvrlin303[/home/oracle]: srvctl add instance -d TSTRAC -i TSTRAC2 -n dbvrlin304
oracle@dbvrlin303[/home/oracle]: srvctl config database -d TSTRAC
Database unique name: TSTRAC
Database name:
Oracle home: /u01/app/oracle/product/11.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/tstrac/spfiletstrac.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TSTRAC
Database instances: TSTRAC1,TSTRAC2
Disk Groups: DATA,FRA
Mount point paths:
Services:
Type: RAC
Database is administrator managed
Once you have completed this, you can use the srvctl command to manage your RAC-enabled database. For example, I first shut down the instances via SQL*Plus, and below I show how to start them up again using srvctl (this command only needs to be executed on one of the nodes):
oracle@dbvrlin303[/home/oracle]: srvctl status database -d TSTRAC
Instance TSTRAC1 is not running on node dbvrlin303
Instance TSTRAC2 is not running on node dbvrlin304
oracle@dbvrlin303[/home/oracle]: srvctl start database -d TSTRAC
oracle@dbvrlin303[/home/oracle]: srvctl status database -d TSTRAC
Instance TSTRAC1 is running on node dbvrlin303
Instance TSTRAC2 is running on node dbvrlin304
And that is it; you now have a RAC-enabled database. There are a few extra things you can do, such as creating services, but I will leave that for another discussion. As you can see, the steps are not that complex, and hopefully this post can get you started.
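As a taste of what is next, here is a minimal sketch of adding and starting a service on the new RAC database (the service name tstrac_app is hypothetical; run "srvctl add service -h" for the full option list):
oracle@dbvrlin303[/home/oracle]: srvctl add service -d TSTRAC -s tstrac_app -r TSTRAC1 -a TSTRAC2
oracle@dbvrlin303[/home/oracle]: srvctl start service -d TSTRAC -s tstrac_app
Here -r lists the preferred instances and -a lists the available (failover) instances for the service.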