
Explain Snapshot Standby database and its use

A snapshot standby database allows us to open a standby database in read-write mode. This is very useful when we want to use our standby database for application testing or development purposes.
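Whether a given database is currently a physical standby or a snapshot standby can be confirmed at any time with a simple query (an illustrative check run from SQL*Plus on the standby; a physical standby typically shows MOUNTED, while a snapshot standby shows READ WRITE):

SQL> select name, database_role, open_mode from v$database;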

In order to create a snapshot standby database:

1. The standby database must be a physical standby database.
2. Flashback logging must be enabled on both the primary and the standby database (see the sketch after this list).
3. After enabling flashback mode, connect to the DGMGRL utility on the primary database.
4. Issue the conversion command:
DGMGRL> CONVERT DATABASE STDBY TO SNAPSHOT STANDBY;
Alternatively, without the broker, the same conversion can be done from SQL*Plus on the standby:
SQL> alter database convert to snapshot standby;
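As a rough sketch of steps 2 to 4, assuming the broker configuration already exists with the standby registered under the name STDBY (as in the command above), a fast recovery area is configured, and the credentials shown are placeholders:

-- On the primary (open) and on the standby (mounted, with redo apply
-- stopped), enable flashback logging and verify it:
SQL> alter database flashback on;
SQL> select flashback_on from v$database;

-- Then, from the primary, convert via the broker:
DGMGRL> CONNECT sys/password
DGMGRL> CONVERT DATABASE STDBY TO SNAPSHOT STANDBY;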

NOTE: A snapshot standby is recommended when we have more than one standby database for the primary, so that at least one physical standby continues to provide disaster protection while the other is used for testing.
From now on, we can do any testing (such as creating new schemas, tables and so on) on our standby database. Please note that at this point all the redo generated on our production database is still shipped to the standby database, but it is not applied until the database is converted back to a physical standby.
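A quick, purely illustrative way to observe this while the snapshot standby is open: archived logs keep arriving, but they remain unapplied.

-- On the snapshot standby: new logs are received, but APPLIED stays NO
-- until the database is converted back to a physical standby
SQL> select sequence#, applied from v$archived_log order by sequence#;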

5. Once the testing is done, you can convert the snapshot standby database back to a physical standby with a single command:
DGMGRL> CONVERT DATABASE STDBY TO PHYSICAL STANDBY;
6. Note that when the above command is executed:
All the changes made to the snapshot standby database (such as new schemas and tables) are discarded.
The previous physical standby database state is restored.
The physical standby database is mounted and the MRP process is started. MRP applies all the logs that were shipped but not yet applied during the snapshot standby state.
7. The duration of this process depends on a few factors:
The amount of changes made to the database during the snapshot standby state: more changes mean more time to rewind them via the Flashback Database operation.
The amount of archived logs generated during the snapshot standby state: more archived logs mean more time to apply them once the database is converted back to a physical standby.
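For completeness, if the Data Guard broker is not in use, the conversion back can also be done from SQL*Plus on the snapshot standby. The following is only a sketch of the commonly used sequence; the database must be brought to MOUNT for the conversion, restarted afterwards, and then redo apply started again:

-- Restart the snapshot standby in MOUNT and convert it back
SQL> shutdown immediate
SQL> startup mount
SQL> alter database convert to physical standby;

-- The instance must be restarted after the conversion
SQL> shutdown immediate
SQL> startup mount

-- Restart redo apply (MRP); it will catch up on the logs shipped
-- while the database was a snapshot standby
SQL> alter database recover managed standby database disconnect from session;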
