
DATA GUARD BASICS


DATA GUARD


  • Data Guard's core mechanism is to ship redo and then apply redo; redo is the information needed to recover a database transaction. A production database, referred to as the primary database, transmits redo to one or more independent replicas, referred to as standby databases.
  • Remember that the LGWR can write to the log file using "group" commits: an entire list of redo entries from waiting transactions (not yet committed) can be written to disk in one operation, reducing I/O. Even though the data buffer cache has not been written to disk, Oracle guarantees that no committed transaction will be lost, because the redo log has successfully saved the changes.
  • A standby database will also automatically resynchronize if it becomes temporarily disconnected from the primary due to power outages, network problems, etc.
  • NOTE: Data Guard Redo Transport Services coordinate the transmission of redo from the primary database to the standby database. While the LGWR is processing redo, a separate Data Guard process called the Log Network Server (LNS) reads from the redo buffer in the SGA and passes redo to Oracle Net Services for transmission to a standby database.
  • The Remote File Server (RFS) process receives the redo from the LNS and writes it to a sequential file called a standby redo log file (SRL). The LNS process supports two modes: synchronous and asynchronous.
  • The Managed Recovery Process (MRP) actually does the work of maintaining and applying the archived redo logs.
  • Synchronous transport (SYNC) is also referred to as the "zero data loss" method because the LGWR is not allowed to acknowledge that a commit has succeeded until the LNS confirms that the redo needed to recover the transaction has been written to disk at the standby site.
  • The user commits a transaction, creating a redo record in the SGA. The LGWR reads the redo record from the log buffer, writes it to the online redo log file, and waits for confirmation from the LNS.
  • The LNS reads the same redo record from the buffer and transmits it to the standby database using Oracle Net Services; the RFS receives the redo at the standby database and writes it to the SRL.
  • When the RFS receives a write-complete from the disk, it transmits an acknowledgment back to the LNS process on the primary database, which in turn notifies the LGWR that the transmission is complete. The LGWR then sends a commit acknowledgment to the user.
  • NOTE: This setup depends heavily on network performance and can have a dramatic impact on the primary database: high network latency will directly increase response times. The impact can be seen in the wait event "LNS wait on SENDREQ" found in the v$system_event dynamic performance view.
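As a sketch, the SYNC transport overhead described above can be observed on the primary with a query along these lines (the exact event name can vary between releases):

```sql
-- Time spent waiting on the LNS during SYNC redo transport (primary database)
SELECT event, total_waits, time_waited_micro
FROM   v$system_event
WHERE  event = 'LNS wait on SENDREQ';
```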

  • Asynchronous transport (ASYNC) differs from SYNC in that it eliminates the requirement that the LGWR wait for an acknowledgment from the LNS, creating a near-zero performance impact on the primary database regardless of the distance between the primary and standby locations.
  • The LGWR will continue to acknowledge commit success even if limited bandwidth prevents the redo of previous transactions from being sent to the standby database immediately.
  • If the LNS is unable to keep pace and the log buffer is recycled before the redo is sent to the standby, the LNS automatically transitions to reading and sending from the log file instead of the log buffer in the SGA.
  • Once the LNS has caught up it then switches back to reading directly from the buffer in the SGA.
  • The log buffer hit ratio is tracked via the view X$LOGBUF_READHIST; a low hit ratio indicates that the LNS is reading from the log file instead of the log buffer. If this happens, try increasing the log buffer size.
  • The drawback of ASYNC is the increased potential for data loss: if a failure destroys the primary database before the transport lag is reduced to zero, any committed transactions that are part of the transport lag are lost. So again, make sure that network bandwidth is adequate and that latency is as low as possible.
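One hedged way to watch the transport lag described above is the v$dataguard_stats view on the standby (statistic names and columns can differ slightly between releases):

```sql
-- On the standby: how far behind are redo transport and redo apply?
SELECT name, value
FROM   v$dataguard_stats
WHERE  name IN ('transport lag', 'apply lag');
```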
  • NOTE: Oracle recently released Advanced Compression option this new product contains several compression features, one of which is redo transport compression for Data Guard, it supports both SYNC and ASYNC. Like all compression tools it does have an impact on CPU resources but it will lower network bandwidth utilization.
  • A log file gap occurs whenever the primary database continues to commit transactions while the LNS process has ceased transmitting redo to the standby database due to some issue (network problems, for example).
  • Data Guard uses an ARCH process on the primary database to continuously PING the standby database during the outage.
  • When the standby database eventually comes back, the ARCH process queries the standby control file (via the RFS process) to determine the last complete log file that the standby received from the primary.
  • The ARCH process will then transmit the missing log files to the standby database using additional ARCH processes. At the very next log switch, the LNS will attempt to connect to the standby database and, on success, begin transmitting the current redo while the ARCH processes resolve the gap in the background.
  • Once the standby apply process catches up to the current redo, it automatically transitions out of reading the archived redo logs and into reading the current SRL. The whole process can be seen in the diagram below.
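To confirm whether gap resolution is still in progress, a query along these lines can be run on the standby:

```sql
-- On the standby: any archived log sequences still missing?
SELECT thread#, low_sequence#, high_sequence#
FROM   v$archive_gap;
```

No rows returned suggests there is currently no detected gap for the standby to resolve.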


PHYSICAL STANDBY

  • Redo apply maintains a block-by-block physical replica of the primary database. It uses media recovery to read records from the SRL (standby redo log) into memory and apply change vectors directly to the standby database. Media recovery performs parallel recovery for very high performance: it comprises a media recovery coordinator (MRP0) and multiple parallel apply processes (PR00, PR01, and so on).
  • On the primary, during redo transport, the LGWR, LNS and ARCH processes perform validation governed by the DB_ULTRA_SAFE parameter
  • On the standby, during redo apply, the RFS, ARCH, MRP and DBWR processes perform validation governed by the DB_BLOCK_CHECKSUM and DB_LOST_WRITE_PROTECT parameters
  • If Data Guard detects any corruption it will automatically fetch new copies of the data from the primary using the gap resolution process, in the hope that the original data is free of corruption.
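As a sketch, the validation parameters mentioned above could be set like this (the values shown are illustrative; all of these settings add CPU overhead, so check the trade-offs for your workload first):

```sql
-- Primary: one umbrella parameter that raises checking levels across the stack
-- (requires a restart, hence SCOPE=SPFILE)
ALTER SYSTEM SET db_ultra_safe = DATA_AND_INDEX SCOPE=SPFILE;

-- Standby: explicit checksum and lost-write protection settings
ALTER SYSTEM SET db_block_checksum     = FULL;
ALTER SYSTEM SET db_lost_write_protect = TYPICAL;
```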

  • The key features of this solution are:

  • Complete application and data transparency - no data type or other restrictions
  • Very high performance, lowest management complexity and fewest moving parts
  • End-to-end validation before applying, including corruptions due to lost writes
  • Can be used for up-to-date read-only queries and reporting while providing DR
  • Able to execute rolling database upgrades beginning with Oracle Database 11g

LOGICAL STANDBY


  • SQL Apply uses the logical standby process (LSP) to coordinate the application of changes to the standby database. SQL Apply requires more processing than redo apply: the processes that make up SQL Apply read the SRL and "mine" the redo by converting it to logical change records, then build SQL transactions and apply the SQL to the standby database. Because there are more moving parts, it requires more CPU, memory and I/O than redo apply.
  • SQL Apply does not support all data types, such as XML stored in object-relational format and Oracle-supplied types such as Oracle Spatial, Oracle interMedia and Oracle Text.
  • The benefit of SQL Apply is that the database is open read-write while apply is active. While you cannot make any changes to the replicated data, you can insert, modify and delete data in local tables and schemas that have been added to the database, and you can even create materialized views and local indexes. This makes it ideal for reporting tools and similar uses.

The key features of this solution are:

  • A standby database that is opened for read-write while SQL apply is active
  • A guard setting that prevents the modification of data that is being maintained by the SQL apply
  • Able to execute rolling database upgrades beginning with Oracle Database 11g using the KEEP IDENTITY clause
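Before choosing SQL Apply, it is worth checking which of your objects it cannot maintain; the dictionary view below exists for that purpose (a sketch, run on the primary):

```sql
-- Which tables hold data types that SQL Apply cannot replicate?
SELECT DISTINCT owner, table_name
FROM   dba_logstdby_unsupported
ORDER  BY owner, table_name;
```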

Note: if you have multiple standby databases you could use both solutions.

  • NOTE: Data Guard can support multiple standby databases in a single configuration; they may or may not have the same protection mode settings depending on your requirements.


DG PROTECTION MODES:

Maximum Performance:
  • This mode requires ASYNC redo transport so that the LGWR process never waits for acknowledgment from the standby database. Also note that Oracle no longer recommends the ARCH transport method that was used for Maximum Performance in previous releases.
  • Note that you will probably lose data if the primary fails before full synchronization has occurred; the amount of data loss depends on how far behind the standby is in processing the redo. This is the default mode.

Maximum Availability:
  • Its first priority is to be available; its second priority is zero data loss, thus it requires SYNC redo transport. In the event that the standby server is unavailable, the primary will wait the number of seconds specified by the NET_TIMEOUT attribute before giving up on the standby server and allowing the primary to continue processing. Once the connection has been re-established, the primary will automatically resynchronize the standby database.
  • When NET_TIMEOUT expires, the LGWR process disconnects from the LNS process, acknowledges the commit and proceeds without the standby. Processing continues until the current ORL is complete and the LGWR cycles into a new ORL; a new LNS process is started and an attempt is made to connect to the standby server. If it succeeds, the new ORL is sent as normal; if not, the LGWR disconnects again until the next log switch. The whole process repeats at every log switch, until (hopefully) the standby database becomes available again. In the background, if any archive logs have been created during this time, the ARCH process will continually ping the standby database until it comes online.
  • You might have noticed there is a potential for data loss if the primary goes down while the standby database has also been down for a period of time and no resynchronization has occurred. This is similar to Maximum Performance, but you do give the standby server a chance to respond, governed by the timeout.
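The NET_TIMEOUT behaviour above is an attribute of the redo transport destination. A hedged sketch (the destination number, service name and timeout value are illustrative):

```sql
-- Primary: SYNC transport with a 30-second limit on waiting for the standby
ALTER SYSTEM SET log_archive_dest_2 =
  'SERVICE=standby_db SYNC AFFIRM NET_TIMEOUT=30
   VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=standby_db';
```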

Maximum Protection:

  • The priority for this mode is data protection, even to the point that it will affect the primary database. This mode uses SYNC redo transport, and the primary will not issue a commit acknowledgment to the application unless it receives an acknowledgment from at least one standby database; if it cannot, the primary will stall and eventually abort, preventing any unprotected commits from occurring. This guarantees complete data protection. In this setup it is advised to have two separate standby databases at different locations with no single point of failure (SPOF); they should not share the same network infrastructure, as that would be a SPOF.
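The three protection modes can be switched with an ALTER DATABASE statement on the primary; as a sketch (raising the mode requires a suitable SYNC destination to already be configured, and Maximum Protection additionally requires a restart of the primary):

```sql
-- Run on the primary; pick exactly one of the three
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PERFORMANCE;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE AVAILABILITY;
ALTER DATABASE SET STANDBY DATABASE TO MAXIMIZE PROTECTION;
```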

Switchover and Failover

  • Data Guard uses two terms for cutting over to the standby server: switchover, which is a planned event, and failover, which is an unplanned event.
  • Switchover is ideal when you want to upgrade the primary database or change the storage/hardware configuration (add memory, CPU, networking); you may even want to upgrade the configuration to Oracle RAC.

What happens during a switchover is the following:

  1. Notify the primary database that a switchover is about to occur
  2. Disconnect all users from the primary database
  3. Generate a special redo record that signals the End of Redo (EOR)
  4. Convert the primary database into a standby database
  5. Once the standby database applies the final EOR record, guaranteeing that no data has been lost, convert the standby database into the new primary database

  • The new standby database (the old primary) starts to receive redo records and continues processing until you switch back again. It is important to remember that both databases receive the EOR record, so both databases know the next redo that will be received.
  • You can even switch over from a Linux database to a Windows database, or from a 64-bit to a 32-bit database, which is great if you want to migrate to a different O/S or 32/64-bit architecture; your rollback option is also very easy: simply switch back if it did not work.
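In SQL*Plus, the switchover steps above map roughly to the following commands (an 11g-style sketch; the Data Guard Broker can do the same with a single SWITCHOVER command):

```sql
-- 1. On the primary: convert it to a standby (this writes the EOR record)
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;

-- 2. On the (old) standby, once the EOR has been applied: make it the primary
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;
```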
  • Failover is an unplanned event, where the EOR record was never written by the primary database. The standby database applies whatever redo it has and then waits; data loss now depends on the protection mode in effect.
  1. Maximum Performance - possible chance of data loss
  2. Maximum Availability - possible chance of data loss
  3. Maximum Protection - no data loss
  • You have the option to fail over manually or to make the whole process automatic. Manual failover gives you, the DBA, maximum control over the whole process, but obviously the length of the outage then depends on getting the DBA out of bed to perform it. Alternatively, the Oracle Data Guard Fast-Start Failover feature can detect a problem and fail over automatically for you; the failover process should take between 15 and 25 seconds.
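On a physical standby, a manual failover can be sketched in SQL*Plus roughly as follows (an 11g-style sketch; the Data Guard Broker's FAILOVER command wraps the equivalent steps):

```sql
-- On the standby: apply all remaining redo, then assume the primary role
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE FINISH;
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;
```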

Data Guard Management

You have three options with which to manage Data Guard:

SQL*Plus
Data Guard Broker - a distributed management framework that centralizes management; driven by the DGMGRL command line.
Enterprise Manager - provides a GUI on top of the Data Guard Broker, replacing DGMGRL.

  • You can check the v$dataguard_status view: select message from v$dataguard_status;
  • The V$REDO_DEST_RESP_HISTOGRAM view contains response time data for each redo transport destination. This response time data is maintained for redo transport messages sent via the synchronous redo transport mode.
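As a sketch, the response-time histogram mentioned above can be inspected on the primary like this (DEST_ID 2 is illustrative; use the destination number of your standby):

```sql
-- Distribution of SYNC acknowledgment times for redo destination 2
SELECT dest_id, duration, frequency
FROM   v$redo_dest_resp_histogram
WHERE  dest_id = 2
ORDER  BY duration;
```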
  • Oracle 11g introduced the snapshot standby database: a physical standby database can easily be opened in read-write mode and then converted back to a physical standby. This is suitable for test and development environments, and it maintains protection by continuing to receive redo from the production database and archiving it for later use.
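Converting to and from a snapshot standby is a pair of statements on the standby (a sketch; the database must be mounted, and a flash recovery area is required because flashback logging is used under the covers):

```sql
-- On the (mounted) physical standby: open it read-write for testing
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- Later: discard the local changes and resume being a physical standby
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;
```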



