ARCHITECTURE
What is Oracle Database?
Oracle Database is a relational database management system (RDBMS) used to store and retrieve large amounts of data. Oracle Database has physical and logical structures, and the two are kept separate from each other.
What is a schema?
A schema is the logical collection of database objects owned by a user: for example tables, procedures, packages, functions, triggers, database links etc.
What is a Tablespace?
It is a logical area of storage in a database that directly corresponds to one or more physical datafiles.
What is a Control File?
Control files record control information about all the files within the database. They are used to maintain internal consistency and play a vital role during database recovery. Oracle modifies the control files automatically; users cannot edit them.
What is a Segment?
A segment is a collection of extents that is used for storing a specific data structure and resides in a single tablespace.
What is an Extent?
An extent is a collection of contiguous data blocks, which is used for storing a specific type of information.
Define data blocks?
Data blocks are the base unit of logical database space. Each block corresponds to a specific number of bytes; the default block size is 8 KB (8192 bytes).
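To confirm the block size of a given database, the standard parameter check can be used:
SQL> show parameter db_block_size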
What are datafiles?
Each database has one or more physical datafiles that actually store the data of all the logical structures in a tablespace. Logically, the space in a datafile is divided into segments, extents and blocks.
What is a Rollback Segment?
A database contains one or more rollback segments, used to roll back transactions and to support data recovery.
What are the different types of Segments?
Data segment (for storing user data), index segment (for storing indexes), rollback segment and temporary segment.
What is a Redo Log?
The primary function of the redo logs is to record all the changes made to the database before they are written to the data files. These files can also be mirrored and are used in recovery operations.
What is a table Cluster?
A table cluster is a group of related tables that share common columns and store related data in the same blocks.
What is a cluster Key?
The common column or group of columns shared by the clustered tables is called the cluster key. The advantage of using a cluster key is that the common columns are stored only once.
What is a synonym?
A synonym is an alias name for a table, view, sequence or program unit.
What are the two types of Synonyms?
The two types of synonyms are private and public. A private synonym can be accessed by its owner only, whereas a public synonym can be accessed by any database user. The default is a private synonym; to create a public one you must specify CREATE PUBLIC SYNONYM.
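A minimal sketch (the table and synonym names here are hypothetical):
SQL> create synonym emp for scott.employees;        -- private, visible to the owner only
SQL> create public synonym emp for scott.employees; -- visible to every database user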
What is System Global Area (SGA)?
It is a shared memory region allocated by Oracle that contains the data and control information for an Oracle instance. The SGA is divided into the database buffer cache, the redo log buffer and the shared pool.
What is an instance?
The SGA and the background processes together constitute an instance.
What is the redolog buffer?
It stores redo entries: a log of the changes made to the database.
What is a shared pool?
The shared pool is divided into two parts: the library cache and the data dictionary cache.
What is the Library cache?
When a SQL statement is issued, Oracle parses it and determines the most efficient execution plan, then caches the statement in the shared pool. If any other user issues the same statement, Oracle shares the statement already in memory rather than repeating the same steps, i.e. it avoids re-parsing the statement.
What is the dictionary cache?
Oracle continuously queries and updates the data dictionary of the database, so the data dictionary cache holds dictionary information in memory to maximize the performance of these internal operations. The Oracle data dictionary contains metadata about all the objects in the database and is owned by the SYS schema. Proper sizing of the data dictionary cache allows fast retrieval of dictionary data.
What is Program Global Area (PGA)?
PGA is a memory region that contains data and control information for a server process. It is non-shared memory created by Oracle Database when a server process is started. Access to the PGA is exclusive to that server process; there is one PGA for each server process. Background processes also allocate their own PGAs.
What is Database Buffer Cache?
The database buffer cache holds data blocks read from the datafiles; each buffer can hold one database block.
The buffer cache holds the most recently used blocks of data, and it can also contain modified data that has not yet been permanently written to disk. When a row in a table is updated, the foreground server process reads the relevant blocks from the datafiles on disk into the buffer cache and modifies the data blocks in server memory. When a user requests new data from disk and no free buffers are available in the buffer cache, DBWR writes dirty blocks from the buffer cache to the appropriate datafiles, with buffers chosen for reuse by an LRU mechanism.
What is a cursor?
When a DML statement like INSERT, UPDATE, DELETE or MERGE is executed, or when a SELECT query is executed, information such as the statement text and the rows of data accessed by it is stored in a private SQL area. A cursor is a pointer to this private SQL area.
What are the important background processes in an Oracle database?
DBWR, LGWR, CKPT, SMON, PMON and ARCn are the six important background processes in an Oracle database.
What is DBWR?
DBWR writes blocks from the buffer cache to the appropriate datafiles. It writes blocks back to disk when a timeout occurs, when a foreground server process cannot find enough free buffers in memory, or when Oracle performs a checkpoint.
What is LGWR?
LGWR writes the redo entries generated in the redo log buffer to the online redo log files. As a transaction is carried out, Oracle creates small records called redo entries, which contain just enough information to regenerate the changes made by the transaction. Oracle temporarily stores the redo entries of all system transactions in the redo log buffer. Oracle does not consider a statement committed until LGWR has written its information to the online redo log files.
What is CKPT?
It is responsible for signaling DBWR at checkpoints and for updating the datafile headers and control files of the database (in older releases this duty could be performed by LGWR). A checkpoint determines how many transaction redo entries Oracle must apply when a server crash occurs and database recovery is required.
What is SMON?
It performs instance recovery at instance startup.
In a cluster environment it recovers other instances that have failed.
It cleans up temporary segments that are no longer in use.
It recovers dead transactions skipped during crash and instance recovery.
It coalesces free extents within the database to make free space contiguous and easy to allocate.
What is PMON?
This process performs process recovery when a user process fails. It is responsible for cleaning up the database buffer cache and freeing the resources the failed process was using. In a shared server environment it checks on the dispatcher and server processes and restarts them at the time of failure.
What is ARCn (ARCH)?
This process copies filled online redo log files to the specified destination. The process is active only when the database is in archivelog mode.
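A quick way to verify the archiving status from SQL*Plus:
SQL> archive log list
SQL> select log_mode from v$database;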
What is an Oracle Stream?
Streams enable information sharing. Using Oracle Streams we can share data and events in a stream. Streams can propagate information within a database or from one database to another, routing specified information to specified destinations.
How does undo management happen in the database?
1. Earlier releases of Oracle Database used rollback segments to store undo data.
2. Oracle 9i introduced automatic undo management, which simplifies undo space management by eliminating the complexities associated with rollback segment management.
3. Undo segments provide read consistency, roll back transactions, help recover the database, and are also used by the flashback features.
4. We can have multiple undo tablespaces, but the instance will use only the one specified by the UNDO_TABLESPACE parameter in init.ora. This parameter can be changed using ALTER SYSTEM.
5. During database recovery, undo records are used to undo any uncommitted changes applied from the redo log to the datafiles.
6. Undo records provide read consistency by maintaining the before image of the data for users who are accessing the data at the same time that another user is changing it.
After a transaction is committed, the undo information still exists in the undo tablespace; it is overwritten based on the UNDO_RETENTION parameter value in the init.ora file, whose default value is 900 seconds.
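A minimal sketch of checking and switching the active undo tablespace (the tablespace name UNDOTBS2 is hypothetical):
SQL> show parameter undo
SQL> alter system set undo_tablespace = UNDOTBS2;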
How to check whether the database is using a pfile or an spfile?
Connect to the specific database as the SYS user and issue the command below.
SQL> show parameter spfile;
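Interpreting the output: if the VALUE column shows a path, the instance was started with an spfile; if it is empty, a pfile was used. An alternative sketch against v$spparameter:
SQL> select decode(count(*), 0, 'PFILE', 'SPFILE') as startup_file
  2  from v$spparameter where isspecified = 'TRUE';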
How to add a new control file to the existing database?
The first step in multiplexing the control files is to see where the existing control files are located.
SQL> show parameter control_files;
Here two control files exist already, so we are adding another one. In order to add a control file we need to update the CONTROL_FILES parameter with the new location, using the ALTER SYSTEM SET CONTROL_FILES command.
SQL> alter system
  2  set control_files='/u02/app/oracle/oradata/orcl/ctl/control01.ctl',
  3  '/u01/app/oracle/flash_recovery_area/orcl/control02.ctl',
  4  '/u03/app/oracle/oradata/orcl/ctl/control03.ctl'
  5  scope=spfile;
System altered.
SQL> shutdown immediate
Now copy one of the existing control files to the new location.
[oracle@ora1 ~]$ cp /u02/app/oracle/oradata/orcl/ctl/control01.ctl /u03/app/oracle/oradata/orcl/ctl/control03.ctl
Start the database as normal.
SQL> startup
One of the control files got corrupted. What is your action to recover it?
Control files are mirrored in different locations. If any one of the control files is corrupted, just shut down the database, copy a good control file over the bad one, and start the instance. It is always better to have more than one control file to avoid issues like this.
If there are 3 control files and one of them is deleted while the database is functioning, what will happen?
1. Nothing to worry about. If the database is up and running, just quickly copy a good control file over the bad one, then invoke a checkpoint using the command below.
SQL> alter system checkpoint;
2. While the database is up and running there won't be any issues, but if we shut down and restart the database it may throw an error. So just copy a good control file over the bad one and start the database.
3. If the database is in a hung status, open another session and shutdown abort the database. Now copy the good control file over the bad one and start up the database as normal.
All control files got corrupted. How will you recover those?
It has never happened to me, but whenever structure-level changes happen in the database we back up the control file using the 'alter database backup controlfile to trace' command. It generates a trace file in udump; using this we can recreate the control files.
Using an RMAN backup we can also restore the control file.
Using a control file backup, we can restore the lost ones either manually or using RMAN.
SQL> shutdown abort;
SQL> startup nomount;
SQL> @create_control_files.sql
If you know all your datafiles, you can create your control file:
1. Start the DB in nomount.
2. Create the control file script and execute it.
3. Alter database open.
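A minimal RMAN sketch, assuming controlfile autobackups were enabled before the loss (the DBID may need to be SET first if no recovery catalog is used):
RMAN> startup nomount;
RMAN> restore controlfile from autobackup;
RMAN> alter database mount;
RMAN> recover database;
RMAN> alter database open resetlogs;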
If one of the logfiles is removed while the database is functioning, what will happen?
It is always better to have mirrored log file members for each group. If any log file gets deleted by mistake, just switch the log file and replace the deleted member with a good one.
We have a minimum of 2 redo log groups for each database instance.
Note: Oracle will never allow you to drop the CURRENT online redo log file; if you try, it will throw an error. We can drop and recreate redo log files that are in INACTIVE status, but we cannot drop the CURRENT log file.
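A sketch of replacing a lost member (the group number and paths are hypothetical; the group must not be CURRENT):
SQL> select group#, status from v$log;
SQL> alter system switch logfile;
SQL> alter database drop logfile member '/u01/app/oracle/oradata/orcl/redo01b.log';
SQL> alter database add logfile member '/u01/app/oracle/oradata/orcl/redo01b.log' to group 1;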
How to check from which home the listener is running?
First check how many Oracle homes we have; there may be more than one Oracle home and more than one listener on the server. Then find how many listeners are running using the command below.
/home/oracle: ps -ef | grep tns
In case multiple listeners are running, use the command below with the specific listener name. The output shows the ORACLE_HOME path the listener is running from.
/home/oracle: lsnrctl status LISTENER
What is flashback database? Advantages of flashback database? How to use it?
Oracle Flashback Database technology was introduced in Oracle 10g. Using this feature we can restore the entire database to a previous state using a restore point.
To enable flashback technology the database has to be in archivelog mode and flashback logging must also be on.
We need to add the parameters below to the init.ora file to enable flashback technology.
db_recovery_file_dest=<Location>
db_recovery_file_dest_size=<SIZE>
And the below for archivelog mode:
log_archive_dest=<Location>
log_archive_format='%r_%t'
Create a fresh pfile from the spfile, add the above parameters to the pfile, and mount the database with that pfile.
SQL> alter database archivelog;
SQL> alter database flashback on;
SQL> alter database open;
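A minimal usage sketch once flashback is enabled (the restore point name is hypothetical):
SQL> create restore point before_upgrade guarantee flashback database;
-- later, to rewind the whole database:
SQL> shutdown immediate;
SQL> startup mount;
SQL> flashback database to restore point before_upgrade;
SQL> alter database open resetlogs;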
How will you increase the redolog size?
The redo logs must be dropped and recreated to change the redo log size. It can be done online without shutting down the database. However, you need to make sure that the redo group being dropped is INACTIVE when you do this.
SQL> select group#, thread#, status from v$log;
SQL> alter database drop logfile group <number>;  -- the group which is inactive
SQL> alter database add logfile group 3 ('D:\DATA_XE\log1a.ora','D:\DATA_XE\log2a.ora') size 10m;
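If the group to be dropped is still ACTIVE or CURRENT, it can be forced to INACTIVE first:
SQL> alter system switch logfile;
SQL> alter system checkpoint;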
If someone accidentally kills the LGWR background process, what happens on the database side?
I have never faced this scenario, but to the best of my knowledge:
At the SQL prompt, if we try to kill the LGWR session it throws an error like the one below.
SQL> alter system kill session '45,1';
alter system kill session '45,1'
*
ERROR at line 1:
ORA-00029: session is not a user session
If the LGWR process terminates due to any issue, the database instance will stop. We need to restart the database, or look in the alert log file for the cause and decide what action to take based on that.
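To identify the LGWR process at the OS level, a sketch using v$process (the PNAME column is available from 11g):
SQL> select pid, spid, pname from v$process where pname = 'LGWR';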
If a DBA accidentally adds a datafile to the wrong tablespace, and data is already in that datafile, but the client asks to roll it back, how will you do that?
Oracle does not provide an interface for dropping datafiles in the same way you can drop a schema object such as a table, a view or a user. Once you make a datafile part of a tablespace and it contains data, the datafile cannot simply be removed (from 10gR2 an empty datafile can be dropped with ALTER TABLESPACE ... DROP DATAFILE, but not one that holds data).
We can move the datafile from the wrong location to the standard location; that needs to be done in the mount stage of the database. Or we can export and import: before starting, we can drop the whole tablespace and recreate it with the required datafiles.
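A sketch of relocating the datafile in the mount stage (the paths are hypothetical):
SQL> shutdown immediate;
SQL> startup mount;
SQL> !cp /wrong/path/users02.dbf /u02/oradata/orcl/users02.dbf
SQL> alter database rename file '/wrong/path/users02.dbf' to '/u02/oradata/orcl/users02.dbf';
SQL> alter database open;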
The oraInventory is lost. How will you recover it?
First check the inventory location using the below.
cat /etc/oraInst.loc
inventory_loc=/opt/oracle/oraInventory
inst_group=oinstall
We can restore it from backup if we have one. If no backup is available, we can recreate the inventory using the Oracle installer available in the Oracle home. Even if the inventory is lost completely, recreating it is a simple call to the installer, which will recreate it and register the ORACLE_HOME.
$ORACLE_HOME/oui/bin/runInstaller -silent -attachHome ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=<Home_Name>
How to recompile invalid components?
Using '@?/rdbms/admin/utlrp' we can compile the whole database's components.
At schema level we can generate ALTER ... COMPILE statements like the one below, log all the invalid objects to a file and execute it.
select 'alter '||object_type||' '||owner||'.'||object_name||' compile;'
from dba_objects
where owner='TICKET_DM' and status='INVALID';
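A sketch of spooling that output to a file and running it (the file name is hypothetical):
SQL> set heading off feedback off pages 0 lines 200
SQL> spool compile_invalid.sql
SQL> select 'alter '||object_type||' '||owner||'.'||object_name||' compile;'
  2  from dba_objects where owner='TICKET_DM' and status='INVALID';
SQL> spool off
SQL> @compile_invalid.sql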
How to identify all the databases & listeners running on the server when the server is up?
For databases => ps -ef | grep pmon (OR) ps -ef | grep smon
For listeners => ps -ef | grep tns (OR) ps -ef | grep listener
Why use the CASCADE option while dropping a user?
Oracle Database does not drop users whose schemas contain objects unless you specify CASCADE or unless you first explicitly drop the user's objects.
Specify CASCADE to drop all objects in the user's schema before dropping the user. You must specify this clause to drop a user whose schema contains any objects.
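For example (the username is hypothetical):
SQL> drop user scott cascade;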
What is ORATAB and what is it used for?
"oratab" is a file created by Oracle in the /etc or /var/opt/oracle directory when installing database software. It is used to list the databases and software versions installed on a server, and the "oraenv" utility uses oratab to set the correct environment variables.
The oratab file contains entries in the form ORACLE_SID:ORACLE_HOME:Y|N. The Y|N flag indicates whether the instance should automatically start at server boot time (Y=yes, N=no).
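A sample entry (the SID and home path are hypothetical):
orcl:/u01/app/oracle/product/11.2.0/dbhome_1:Y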
What are root.sh and orainstRoot.sh files used for?
When installation is almost complete, you are prompted to run two scripts, orainstRoot.sh and root.sh, as root.
/u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read, write permissions for group.
Removing read, write, execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
/u01/app/oracle/product/11.2.0/dbhome_1/root.sh
Running Oracle 11g root.sh script...
The following environment variables are set as:
ORACLE_OWNER= oracle
ORACLE_HOME= /u01/app/oracle/product/11.2.0/dbhome_1
Enter the full pathname of the local bin directory: [/usr/local/bin]:
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
LINK > https://docs.oracle.com/cd/E26370_01/doc.121/e26358/dbinstall.htm
11G NEW FEATURES
Table compression
SCAN listener
Database Replay
RAC One Node Concept
RMAN UNDO bypass
Automatic Diagnostic Repository (ADR)
Installation and Upgrade Enhancements
Storage Enhancements
Change Management Overview and SQL Plan
Management
Database Replay and Automatic SQL Tuning
Intelligent Infrastructure Enhancements
Performance Enhancements
Partitioning and Storage-Related
Enhancements
Using RMAN Enhancements, Flashback and
LogMiner
Diagnostic Enhancements and Using the Data
Recovery Advisor
New Security Features
Secure Files and Miscellaneous New Features
DATAPUMP
Is it possible to take backup of
11g and import into 10g? If yes how will you do that?
For whole database it's not possible. But
schema level we can do other than database schemas. Have tried exporting schema
data from 10G database and imported in the 11g database without any issues.
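A sketch of the downgrade-compatible export (the credentials, directory and schema names are hypothetical):
expdp system/password SCHEMAS=hr VERSION=10.2 DIRECTORY=dp_dir DUMPFILE=hr_v102.dmp LOGFILE=hr_v102.log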
What are the uses of expdp compared to normal exports?
1. exp/imp is available up to 9i; Data Pump is a 10g new feature.
2. Compared to normal exp/imp, Data Pump is very fast for loading and unloading data.
3. Data Pump uses direct path loading and unloading wherever possible, which is much faster than the conventional path used by exp/imp.
4. Data Pump operates on multiple dump files; exp/imp operates on a single file only.
5. Data Pump accesses files on the server (using Oracle directory objects). There is no concept of Oracle directories in exp/imp.
6. exp/imp represents metadata information as DDL in the dumpfile, whereas Data Pump represents it in XML document format.
7. Data Pump uses parallel execution; exp/imp uses a single stream of execution.
8. Data Pump (impdp) will create the target user if it does not exist; with imp the DBA needs to create the user manually before the import starts.
9. Using Data Pump we can stop, resume and restart jobs (see the sketch after this list).
10. If there is not sufficient space while importing data, a Data Pump job suspends and resumes once the DBA adds space; imp fails directly.
11. Whenever a Data Pump export or import is running, Oracle creates a master table named after the JOB_NAME, which is deleted once the job is done. From this table Oracle tracks how much of the job has completed and where to continue from.
12. With Data Pump we can set the PARALLEL parameter for both expdp and impdp.
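A sketch of stopping and resuming a running job interactively (the job name is hypothetical):
$ expdp system/password ATTACH=hr_exp_job
Export> STOP_JOB=IMMEDIATE
$ expdp system/password ATTACH=hr_exp_job
Export> START_JOB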
What is the difference between exp compress and expdp compression?
In expdp we have a parameter called COMPRESSION. With exp we need an OS utility to compress the files, or we can do it using the PIPE option of exp/imp.
LINK > http://www.acehints.com/2011/05/how-to-compress-export-exp-dump-files.html
LINK > http://www.acehints.com/2011/07/oracle-11g-data-pump-expdp-compression.html
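Two sketches (all names are hypothetical; note that in 11g COMPRESSION=ALL requires the Advanced Compression option):
expdp system/password SCHEMAS=hr COMPRESSION=ALL DIRECTORY=dp_dir DUMPFILE=hr_comp.dmp
# classic exp PIPE approach on Unix:
mknod /tmp/exp_pipe p
gzip < /tmp/exp_pipe > /tmp/hr_exp.dmp.gz &
exp system/password OWNER=hr FILE=/tmp/exp_pipe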
I started a backup at 11:00 AM, but I want the expdp backup to contain data only as of 10:00 AM; I don't want DB changes made after 10:00 AM in that backup dump. How will you do that?
I have taken one-hour-old data for a database schema using FLASHBACK_TIME in Data Pump; this relies on the undo data still being available in the undo tablespace.
FLASHBACK_TIME="2015-09-13 09:21:00"
export EXP_TIME=`date -d "-2 min" +"%Y-%m-%d %H:%M:00"`
How to export only metadata using Data Pump?
It is possible to back up only the metadata using the CONTENT=METADATA_ONLY parameter.
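For example (the names are hypothetical):
expdp system/password SCHEMAS=hr CONTENT=METADATA_ONLY DIRECTORY=dp_dir DUMPFILE=hr_meta.dmp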
How can you do a schema refresh? What are the best parameters?
We use Data Pump to do schema refreshes from PROD to the UAT, MIGR and DEV databases.
EXPDP => USERID/PWD, SCHEMAS, DIRECTORY, DUMPFILE, PARALLEL, FLASHBACK_TIME, JOB_NAME and LOGFILE
IMPDP => USERID/PWD, REMAP_SCHEMA, REMAP_TABLESPACE, DIRECTORY, DUMPFILE, PARALLEL, JOB_NAME, LOGFILE and EXCLUDE
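A sketch of such a refresh pair (all names are hypothetical):
expdp system/password SCHEMAS=hr DIRECTORY=dp_dir DUMPFILE=hr_%U.dmp PARALLEL=4 JOB_NAME=hr_exp LOGFILE=hr_exp.log
impdp system/password REMAP_SCHEMA=hr:hr_uat REMAP_TABLESPACE=users:users_uat DIRECTORY=dp_dir DUMPFILE=hr_%U.dmp PARALLEL=4 JOB_NAME=hr_imp LOGFILE=hr_imp.log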
LINUX
How to check OS mount points?
df -h => Using this we can check OS mount points.
PATCH
How do you rollback a patch with opatch?
Just go to the OPatch location and issue the below.
opatch rollback -id <patch_id>
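For example (the patch number is hypothetical, reused from the apply example below):
$ cd $ORACLE_HOME/OPatch
$ ./opatch rollback -id 2365617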
How will you apply a patch on an 11.2.0.2 database? Explain in detail.
1. We have a centralized server to keep all the software and patches once downloaded from MOS.
2. Once we download the patch, we create a temporary directory and copy or move the software there.
3. We open a change ticket to get the proper approvals and the required downtime. We notify all the application teams 1 or 2 days in advance about this maintenance activity and inform them of the downtime window as well.
4. We back up the Oracle home location before applying the patch, using the tar command.
5. NOC (network operations center) will bring down all the applications.
6. We add a blackout entry in OEM and shut down all the databases running under that ORACLE_HOME.
7. Then we unzip the downloaded patch.
8. Export all the parameters below.
export ORACLE_HOME=<HOME>
export ORACLE_SID=<SID>
export PATH=$ORACLE_HOME/OPatch:$PATH
9. Issue opatch lsinventory to check which patches have been applied so far.
10. Then go to the patch directory and issue opatch apply with the patch directory path:
opatch apply /tmp/2365617
11. It will prompt to confirm that all the DBs are shut down; we just need to press Y and the patching will start.
12. We can find the log in /oracle/product/10.2.0/db_1/cfgtoollogs/opatch/
13. Once the patch is applied, bring all the databases back up and run the below.
@?/rdbms/admin/catbundle.sql psu apply
@?/rdbms/admin/utlrp.sql
NOTE: We need both the OPatch utility and the patch to be applied.
How will you check for patch conflicts?
Using the opatch lsinventory command we can check which patches have been applied so far.
To check whether there are any conflicts among the patches to be applied, with detailed information about the conflicts/supersets:
opatch prereq CheckConflictAmongPatchesWithDetail -phBaseDir <path_where_all_the_patches_to_be_applied_are_kept>
To check whether there are any conflicts between the patches to be applied and the patches already applied in the $ORACLE_HOME:
opatch prereq CheckConflictAgainstOHWithDetail -phBaseDir <path_where_all_the_patches_to_be_applied_are_kept>
How to check which patches are installed at database level and server level?
export PATH=$ORACLE_HOME/OPatch:$PATH
cd to the OPatch directory in ORACLE_HOME
Command => opatch lsinventory