Showing posts from 2020

DATAPUMP EXPORT USING MULTIPLE DIRECTORIES IN ORACLE DATABASE

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> select name from v$database;

NAME
---------
ORCL

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> create directory krishna as 'e:\krishna\dump';

Directory created.

SQL> create directory lattu as 'd:\krishna\dump';

Directory created.

SQL> grant read,write on directory krishna to system;

Grant succeeded.

SQL> grant read,write on directory lattu to system;

Grant succeeded.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - Production
With the Partitioning, OLAP and Data Mining options

C:\Documents and Settings\u19791>expdp system/manager dumpfile=krishna:k.dmp,lattu:Ll.dmp logfile=krishna:tst.log parallel=2 SCHEMAS=SCOTT

Export: Release 11.2.0.4.0 - Production on Wednesday, 30 November, 2018 22:59:20

Connected to: Oracle Database 11g Enterprise Edition Release 11.2

LOCATE ALL RUNNING DATAPUMP JOBS IN ORACLE DATABASE

-- locate Data Pump jobs (note: the state filter below lists jobs left in the
-- NOT RUNNING state, i.e. stopped or orphaned jobs; drop the filter, or use
-- state='EXECUTING', to see jobs that are currently running):
SET lines 200
COL owner_name FORMAT a10
COL job_name FORMAT a20
COL state FORMAT a12
COL operation LIKE state
COL job_mode LIKE state

SELECT owner_name, job_name, operation, job_mode, state, attached_sessions
FROM   dba_datapump_jobs
WHERE  job_name NOT LIKE 'BIN$%'
AND    state = 'NOT RUNNING'
ORDER  BY 1, 2;

spool off
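Once a job is located, it helps to know which database sessions belong to it. A hedged sketch (assumes DBA privileges): DBA_DATAPUMP_SESSIONS links each job to V$SESSION via the session address, so the job can be monitored or its sessions identified.

```sql
-- map located Data Pump jobs to their database sessions
SELECT j.owner_name, j.job_name, j.state, s.sid, s.serial#
FROM   dba_datapump_jobs j,
       dba_datapump_sessions d,
       v$session s
WHERE  j.job_name   = d.job_name
AND    j.owner_name = d.owner_name
AND    d.saddr      = s.saddr;
```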

CHECK ALL EXECUTION PLAN ID'S OF A SQLID IN ORACLE DATABASE

spool check_execution_plan_details.log
set linesize 600

select a.sql_id, a.snap_id,
       to_char(b.begin_interval_time,'mm/dd/yyyy hh24:mi') snap_tm,
       a.instance_number, a.plan_hash_value, a.sorts_delta,
       a.executions_delta, a.disk_reads_delta, a.buffer_gets_delta,
       a.rows_processed_delta,
       a.cpu_time_delta/1000000 cpu_time_delta,
       a.elapsed_time_delta/1000000 elapsed_time_delta,
       a.iowait_delta/1000000 iowait_delta,
       a.clwait_delta/1000000 clwait_delta
from   dba_hist_sqlstat a, dba_hist_snapshot b
where  a.sql_id = '&sql_id'
and    a.snap_id = b.snap_id
and    a.instance_number = b.instance_number
and    a.dbid = b.dbid
order  by a.instance_number, a.snap_id desc, a.plan_hash_value;

spool off;
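Once a plan_hash_value of interest is known from the query above, the stored plan itself can be pulled from AWR. A minimal sketch using the standard DBMS_XPLAN package (the substitution variables are prompts, not fixed values):

```sql
-- display an historical execution plan captured in AWR
-- for a given SQL_ID and PLAN_HASH_VALUE
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_AWR('&sql_id', &plan_hash_value));
```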

SCRIPT TO CONNECT TO ALL DATABASE IN A SERVER

iru@/home/oracle/dba/scripts: cat check_all_database.sh
#!/bin/sh
ORATAB=/home/oracle/dba/scripts/oratab
export ORATAB

start() {
        sqlplus -s '/as sysdba'<<!
set head off
set pagesize 100
select instance_name from v\$instance;
select username from dba_users where username like '%AM%';
select username from dba_users where username like '%MA%';
select count(1) from all_objects where owner='RKEHOSKIE';
exit
!
}

cat $ORATAB | egrep -v '#|^$|ASM' | while read i
do
(
        ORACLE_HOME=`echo ${i} | awk -F":" '{ print $2 }'`
        ORACLE_SID=`echo ${i} | awk -F":" '{ print $1 }'`
        export ORACLE_HOME ORACLE_SID
        PATH=$ORACLE_HOME/bin:$PATH
        export PATH
        start
        echo "~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"
        sleep 2
) | tee -a check.log
done

CHECK SCHEMA STATS SCRIPT IN ORACLE DATABASE

scripts]$ cat check_schema_stats.sh
#!/bin/bash
#
# periodic analyze schema cron script
#
# to run:
#   analyze_schema <sid> <schema owner>
#
ORACLE_SID=$1
ORAENV_ASK=NO
SCHEMA_OWNER=$2
echo $SCHEMA_OWNER
export ORACLE_SID
export SCHEMA_OWNER
export SCHEMA_PASSWD
ORAENV_ASK=NO

sqlplus -s '/as sysdba' <<eof
spool /tmp/analyze_schema_$SCHEMA_OWNER.log
SET SERVEROUTPUT ON
SET TIME ON
SET TIMING ON
SET ECHO ON
SELECT TRUNC(LAST_ANALYZED) LAST_ANALYZED, COUNT(*) TABLES
FROM   ALL_TABLES
WHERE  OWNER = '$SCHEMA_OWNER'
GROUP  BY TRUNC(LAST_ANALYZED)
ORDER  BY 1;
SELECT TRUNC(LAST_ANALYZED) LAST_ANALYZED, COUNT(*) INDICES
FROM   ALL_INDEXES
WHERE  OWNER = '$SCHEMA_OWNER'
GROUP  BY TRUNC(LAST_ANALYZED)
ORDER  BY 1;
spool off
disconnect
quit
eof
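The script above only reports LAST_ANALYZED dates; actually refreshing stale statistics is a separate step. A hedged sketch using the standard DBMS_STATS API (the parameter values below are illustrative, not taken from the post):

```sql
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => '&SCHEMA_OWNER',              -- schema to analyze
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,  -- let Oracle pick the sample
    cascade          => TRUE,                         -- include indexes
    degree           => 4);                           -- illustrative parallel degree
END;
/
```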

ACTIVE SESSION DETAILS SCRIPT IN ORACLE DATABASE

irut@/home/oracle/dba/scripts: cat active_session_history.sql
spool active_session_history.log
set pages 5000
set lines 500
set trims on
col sample_time format a25
col event format a30
col sql_id format a15
col sql_text format a200
col session_id format 99999
col p1text format a10
col p2text format a30

select sh.inst_id,
       sh.sample_time,
       sh.sql_id,
       sh.session_id,
       sh.blocking_session,
       sh.wait_time,
       sh.time_waited,
       sh.event,
       sh.p1,
       sh.p1text,
       sh.p2,
       sh.p2text,
       (select distinct substr(t.sql_text,1,200)
        from gv$sqlarea t
        where sh.sql_id = t.sql_id) sql_text
from gv$active_session_history sh
where sh.event is not null
--and sh.sample_time > sysdate-10/1440
and sh.session_id=&SID
order by sh.sample_time, sample_id;

spool off;

ORACLE DATABASE CREATE SCRIPT

/*
  Script creates PVPROD02 database
  Actions include:
    Create Database
    Create Tablespaces
    Setup Dictionary
    Create Application Roles
    Create Application Users
*/
------------------------------------------------------------------------------------------
-- set SID
------------------------------------------------------------------------------------------
define CR_SID = PVPROD02
spool create_db_&CR_SID
set time on
set timing on
set echo on
set pagesize 4000
set linesize 2000
set trims on

connect / as sysdba
startup nomount
------------------------------------------------------------------------------------------
-- Create Database
------------------------------------------------------------------------------------------
CREATE DATABASE &CR_SID ARCHIVELOG
    USER SYS IDENTIFIED BY sys4&CR_SID
    USER SYSTEM IDENTIFIED BY system4&CR_SID
    MAXLOGFILES 50
    MAXLOGMEMBERS 4
    MAXDATAFILES 1000
    MAXINSTANCES 8
    MAXLOGHISTORY 20000
    CHARACTER SET U

RMAN BACKUP SCRIPT

# set environment
. $HOME/.bash_profile
export ORACLE_SID=TWINTMP
export ORAENV_ASK=NO
. $ORACLE_HOME/bin/oraenv
export USER=dbadmin
export PASSWD=brownbag6
export RMAN_HOME=/backup/TWINTMP
export RMAN_LIST_SFX=`date '+_%y%m%d_%H%M%S'`.log
export BACKUP_DEST=/backup/TWINTMP

######################################################################
# Backup database to disk backup set (rman).
######################################################################

echo running rman TARGET $USER/$PASSWD LOG $RMAN_HOME/rman_data_bkup_$RMAN_LIST_SFX
$ORACLE_HOME/bin/rman TARGET $USER/$PASSWD LOG $RMAN_HOME/rman_data_bkup_$RMAN_LIST_SFX <<EOF
show all;
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT = '$RMAN_HOME/rman_bkup_TWINTMP%U' MAXPIECESIZE 2G;
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '$RMAN_HOME/rman_bkup_TWINTMP%F';
CONFIGURE SNAPSHOT CONTROLFILE NAME TO '$RMAN_HOME/snapcf_TWINTMP.f';
LIST BACKUP

DATAPUMP - EXPORT SCHEMA SCRIPT

#!/bin/ksh
#
# generic environment settings
#
. $HOME/.bash_profile

export ORAENV_ASK=NO
. $ORACLE_HOME/bin/oraenv
export ORACLE_SERVER=`uname -n`
export ORACLE_SID=TEST
export USER=dbadmin
export PASSWD=white1112
export EXP_DIR=/backup/expdp/data/
export LOG_DIR=$EXP_DIR
export EXP_SCHEMA=TEST
export EXP_NAME=${ORACLE_SERVER}_${EXP_SCHEMA}_`date +%Y%m%d_%H%M`
#
# export schema and data
#
expdp $USER/$PASSWD SCHEMAS=$EXP_SCHEMA DIRECTORY=${ORACLE_SID}_expdp \
      DUMPFILE=$EXP_NAME.dmp1,$EXP_NAME.dmp2,$EXP_NAME.dmp3,$EXP_NAME.dmp4 \
      FILESIZE=8g PARALLEL=4 LOGFILE=$EXP_NAME.log

#gzip -f ${EXP_DIR}/*.dmp?
#find $EXP_DIR -name "*.gz" -mtime +5 -type f -exec rm {} \;

exit 0

DROP RESTORE POINT AND DELETING ALL THE ARCHIVE LOGS USING RMAN IN ORACLE

DROP RESTORE POINT AND DELETING ALL THE ARCHIVE LOGS USING RMAN IN ORACLE AND TURN OFF FLASHBACK AND ARCHIVE LOG MODE

SQL> select name from v$database;

NAME
---------
PMITCKU

SQL> column NAME format a40;
SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#,
            GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE
     FROM V$RESTORE_POINT
     WHERE GUARANTEE_FLASHBACK_DATABASE='YES'
     ORDER BY TIME;

NAME                                            SCN
---------------------------------------- ----------
TIME
---------------------------------------------------------------------------
DATABASE_INCARNATION# GUA STORAGE_SIZE
--------------------- --- ------------
FEB09_2011                               2.2071E+10
09-FEB-11 02.08.02.000000000 PM
                    1 YES    382599168

SQL> SELECT NAME FROM V$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES' ORDER BY TIME;

NAME
----------------------------------------
FEB09_2
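The listing above is cut off before the actual cleanup. A sketch of the commands the title describes, using the restore point name from the output (this is standard syntax, not the post's exact transcript):

```sql
-- drop the guaranteed restore point found above
DROP RESTORE POINT FEB09_2011;

-- turn off flashback
ALTER DATABASE FLASHBACK OFF;

-- turning off archivelog mode requires a bounce to MOUNT
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;
```

From RMAN, the archive logs are then removed with: RMAN> DELETE NOPROMPT ARCHIVELOG ALL;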

ADDING A DATAFILE TO A TABLESPACE WITH AUTOEXTEND OPTION IN ORACLE DATABASE.

irqadbapoc1@/home/oracle: . oraenv
ORACLE_SID = [+ASM] ? PVDEV2
irqadbapoc1@/home/oracle: sqlplus "/as sysdba"

SQL*Plus: Release 11.2.0.4.0 - Production on Fri Feb 11 09:08:44 2018

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select name from v$database;

NAME
---------
PVDEV2

SQL> alter tablespace TWINSD2 add datafile '+PVDEV2' size 2g autoextend on maxsize 30g;

Tablespace altered.

SQL> alter tablespace ROYALSD2 add datafile '+PVDEV2' size 2g autoextend on maxsize 10g;

Tablespace altered.

SQL> alter tablespace REDSOXD2 add datafile '+PVDEV2' size 2g autoextend on maxsize 10g;

Tablespace altered.
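To confirm the new files picked up the autoextend settings, a quick check against DBA_DATA_FILES can follow (tablespace name taken from the example above):

```sql
SELECT file_name,
       bytes/1024/1024/1024    size_gb,
       autoextensible,
       maxbytes/1024/1024/1024 max_gb
FROM   dba_data_files
WHERE  tablespace_name = 'TWINSD2';
```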

HIGH LEVEL STEPS TO SETUP A STANDBY DATABASE IN ORACLE

1) The database has to be in archivelog mode.
2) Force logging should be enabled.
3) The DB_NAME will be the same on primary and standby, but the DB_UNIQUE_NAME will be different.
4) Set the Data Guard config: ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB11G,DB11G_STBY)';
5) Set log_archive_dest_2 and enable it.
6) REMOTE_LOGIN_PASSWORDFILE = { shared | exclusive | none } should be EXCLUSIVE.
7) ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;
8) Add entries in the tnsnames.ora file on both the primary and standby sides.
9) Back up the primary database.
10) Create a standby control file: ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/db11g_stby.ctl';
11) CREATE PFILE='/tmp/initDB11G_stby.ora' FROM SPFILE;
12) Amend the PFILE, making the entries relevant for the standby database:
    *.db_unique_name='DB11G_STBY'
    *.fal_server='DB11G'
    *.log_archive_dest_2='SERVICE=db11g ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G'
13) Create the necessary d
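The primary-side steps above can be sketched as one SQL*Plus session (names follow the DB11G / DB11G_STBY example; service names and paths must be adjusted for a real environment):

```sql
-- primary side: force logging, Data Guard config, redo transport, file management
ALTER DATABASE FORCE LOGGING;
ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(DB11G,DB11G_STBY)';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2=
  'SERVICE=DB11G_STBY ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DB11G_STBY';
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;
ALTER SYSTEM SET STANDBY_FILE_MANAGEMENT=AUTO;

-- standby control file and parameter file for the standby
ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/db11g_stby.ctl';
CREATE PFILE='/tmp/initDB11G_stby.ora' FROM SPFILE;
```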

ABOUT SHMALL, SHMMAX AND SHMMNI KERNEL PARAMETERS

SHMALL => the maximum total amount of shared memory (in pages) that the server can allocate.
SHMMAX => the maximum size (in bytes) that a single shared memory segment can have in physical RAM.
SHMMNI => the system-wide maximum number of shared memory segments.

Using the below we can check and edit these parameters.

Kernel parameters location: /proc/sys/kernel

cd /proc/sys/kernel
ls -ltr shm*
-rw-r--r-- 1 root root 0 May 15 05:46 shmmax
-rw-r--r-- 1 root root 0 May 15 09:23 shmmni
-rw-r--r-- 1 root root 0 May 15 09:23 shmall

sysctl -a | grep shm

To change kernel parameters:
sysctl -w kernel.shmmax="xxxxxx"
vi /etc/sysctl.conf => edit and save the file, then run sysctl -p; the changes take effect without restarting the server.

SHMMAX and SHMALL are two key shared memory parameters that directly impact the way Oracle creates an SGA. While trying to create an SGA during a database startup, Oracle chooses fro
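Since SHMMAX is in bytes but SHMALL is in pages, the two are easy to mis-size. A minimal sketch deriving a matching SHMALL from a chosen SHMMAX (the 64 GB value and 4 KB page size below are assumptions; check the real page size with getconf PAGE_SIZE):

```shell
#!/bin/sh
# Derive kernel.shmall (pages) from kernel.shmmax (bytes).
SHMMAX=68719476736          # example: 64 GB in bytes (assumption, size to your RAM)
PAGE_SIZE=4096              # typical x86 page size; verify with: getconf PAGE_SIZE
SHMALL=$((SHMMAX / PAGE_SIZE))
echo "kernel.shmmax = $SHMMAX"
echo "kernel.shmall = $SHMALL"
# these two lines would go into /etc/sysctl.conf, applied with: sysctl -p
```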

ABOUT HUGE PAGES FOR ORACLE ADMINS

HUGE PAGES:

Overview of HugePages

## HugePages is a feature integrated into the Linux kernel 2.6. Enabling HugePages makes it possible for the operating system to support memory pages greater than the default (usually 4 KB).
## Using very large page sizes can improve system performance by reducing the number of system resources required to access page table entries.
## HugePages is useful for both 32-bit and 64-bit configurations.
## HugePage sizes vary from 2 MB to 256 MB, depending on the kernel version and the hardware architecture.
## For Oracle Databases, using HugePages reduces the operating system maintenance of page states, and increases the Translation Lookaside Buffer (TLB) hit ratio.

The commands to get Hugepagesize from the OS prompt:

1) grep Hugepagesize /proc/meminfo
Hugepagesize:       2048 kB

2) grep Huge /proc/meminfo
AnonHugePages:         0 kB
HugePages_Total:   14016
HugePages_Free:     6639
HugePages_Rsvd:     6613
HugePages_Surp:        0
Hugepagesize:       2048 k
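A common admin task is sizing vm.nr_hugepages for a given SGA. A minimal sketch of the arithmetic, assuming the 2 MB Hugepagesize shown above (the 8 GB SGA figure is a made-up example; on a live box it would come from the actual shared memory segment sizes in ipcs -m):

```shell
#!/bin/sh
# Estimate HugePages_Total needed to back an SGA with 2 MB huge pages.
SGA_BYTES=$((8 * 1024 * 1024 * 1024))   # example: 8 GB SGA (assumption)
HUGEPAGE_BYTES=$((2 * 1024 * 1024))     # Hugepagesize: 2048 kB from /proc/meminfo
PAGES=$(( (SGA_BYTES + HUGEPAGE_BYTES - 1) / HUGEPAGE_BYTES ))  # round up
echo "vm.nr_hugepages = $PAGES"
```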

19c New Features for Oracle Database Admins

19c New features:

1) Ability to Create a Duplicate of an Oracle Database Using DBCA in Silent Mode
You can now create a duplicate of an Oracle Database by using the createDuplicateDB command of Database Configuration Assistant (DBCA) in silent mode.

2) Ability to Relocate a PDB to Another CDB Using DBCA in Silent Mode
You can now relocate a pluggable database (PDB) to another multitenant container database (CDB) by using the relocatePDB command of Database Configuration Assistant (DBCA) in silent mode. This feature enables automating the PDB life cycle operation of relocating a PDB using DBCA in silent mode.

3) Ability to Create a PDB by Cloning a Remote PDB Using DBCA in Silent Mode
You can now create a pluggable database (PDB) by cloning a remote PDB using the createFromRemotePDB parameter of the createPluggableDatabase command of Database Configuration Assistant (DBCA) in silent mode. This feature enables automating the PDB life cycle operation of cloning a PDB using DBCA in silent

SOME 12C NEW FEATURES FOR ORACLE DATABASE ADMINS

12C NEW FEATURES:

1) Like sysdba, sysoper & sysasm, we have new privileges in Oracle 12.1.0:
    sysbackup for backup operations
    sysdg for Data Guard operations
    syskm for key management

2) Oracle Database 12c Data Pump allows turning off redo for the import operation (only):
    $ impdp ... transform=disable_archive_logging:y

3) We can create invisible columns and change them to visible or invisible.

4) Moving and renaming a datafile is now ONLINE; there is no need to take the datafile offline:
    SQL> alter database move datafile 'path' to 'new_path';

5) Restricting PGA size - before Oracle 12c R1 there was no option to limit and control the PGA size; although you could set a certain size with the PGA_AGGREGATE_TARGET initialization parameter, in 12c we can restrict it using the PGA_AGGREGATE_LIMIT parameter.

6) ASM: introduction of Flex ASM - ASM would run on only 3 instances on a cluster even if there are more than 3; the instances that do not have an ASM would request a remote ASM, typi
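Feature 3 (invisible columns) in a minimal sketch; the table and column names below are made up for illustration:

```sql
-- invisible at creation time
CREATE TABLE demo_t (id NUMBER, secret NUMBER INVISIBLE);

-- SELECT * and %ROWTYPE skip invisible columns; they must be named explicitly
INSERT INTO demo_t (id, secret) VALUES (1, 42);
SELECT id, secret FROM demo_t;

-- toggle visibility either way
ALTER TABLE demo_t MODIFY (secret VISIBLE);
ALTER TABLE demo_t MODIFY (id INVISIBLE);
```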

Dynamic remastering and Dynamic Reconfiguration in ORACLE RAC

Before starting the discussion on dynamic remastering, you need to be familiar with the Global Resource Directory (GRD). The GRD basically contains useful information such as the data block address, mode (null, shared or exclusive), role (local or global), SCN, past image and current image. Each instance maintains its portion of the GRD in its own SGA. In other words, any node which is the master of a particular object or block maintains the GRD portion for those blocks in its SGA. GCS, in conjunction with GES, is responsible for updating the GRD for a resource in the instance's SGA. The objective of dynamic remastering is to master a buffer cache resource on the instance where it is mostly accessed. In Oracle RAC, each and every data block is mastered by an instance; that instance is called the resource master for that data block. The master instance keeps track of the status of the data block, like which instance is holding this block, in which state it is being held, etc. This information is stored in GR
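Remastering activity can be observed from the data dictionary. A hedged sketch against V$GCSPFMASTER_INFO (note this view reports instance numbers zero-based, so a CURRENT_MASTER of 0 means instance 1):

```sql
-- objects that have been remastered, most frequently remastered first
SELECT data_object_id,
       current_master,
       previous_master,
       remaster_cnt
FROM   v$gcspfmaster_info
ORDER  BY remaster_cnt DESC;
```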

Monitoring Oracle RAC Database and Cluster Performance.

Both Oracle Enterprise Manager Database Control and Oracle Enterprise Manager Grid Control are cluster-aware and provide a central console to manage your cluster database. From the Cluster Database Home page, you can do all of the following:

- View the overall system status, such as the number of nodes in the cluster and their current status. This high-level view capability means that you do not have to access each individual database instance for details if you just want to see inclusive, aggregated information.
- View alert messages aggregated across all the instances with lists for the source of each alert message. An alert message is an indicator that signifies that a particular metric condition has been encountered. A metric is a unit of measurement used to report the system's conditions.
- Review issues that are affecting the entire cluster and those that are affecting individual instances.
- Monitor cluster cache coherency sta

Where the statistics are stored in oracle database and views to check STATS details

Where the statistics are stored in oracle database and views to check STATS details?

Statistics on tables, indexes, and columns are stored in the data dictionary. To view statistics in the data dictionary, query the appropriate data dictionary view (USER, ALL, or DBA). These DBA_* views include the following:

DBA_TABLES
DBA_OBJECT_TABLES
DBA_TAB_STATISTICS
DBA_TAB_COL_STATISTICS
DBA_TAB_HISTOGRAMS
DBA_INDEXES
DBA_IND_STATISTICS
DBA_CLUSTERS
DBA_TAB_PARTITIONS
DBA_TAB_SUBPARTITIONS
DBA_IND_PARTITIONS
DBA_IND_SUBPARTITIONS
DBA_PART_COL_STATISTICS
DBA_PART_HISTOGRAMS
DBA_SUBPART_COL_STATISTICS
DBA_SUBPART_HISTOGRAMS
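For example, a quick look at table-level statistics (including staleness) for one schema can use DBA_TAB_STATISTICS from the list above (the schema name is a placeholder):

```sql
SELECT table_name, num_rows, blocks, stale_stats, last_analyzed
FROM   dba_tab_statistics
WHERE  owner = 'SCOTT'
ORDER  BY last_analyzed;
```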

Explain Snapshot Standby database and the use of it

Explain Snapshot Standby database and the use of it?

A snapshot standby lets you open the standby database read-write. This is very useful for turning our standby database over to application testing / development purposes. This is achieved through a snapshot standby database.

In order to create a snapshot standby database:
1. The standby database must be a physical standby database.
2. Flashback logging must be enabled on both the primary and standby database.
3. After enabling flashback mode, connect to the dgmgrl utility on the primary database:
4. DGMGRL> CONVERT DATABASE STDBY TO SNAPSHOT STANDBY;
   SQL> alter database convert to snapshot standby;

NOTE: A snapshot standby is recommended if we have more than one standby database for the primary database. From now on, we can do any testing (like creating new schemas, tables and so on) on our standby database. Please note that at this point in time, all the redo generated in our production database will still be shipped to the standby database. But it is just that it is not going to

What is active data guard and how to enable the standby database in read-only mode (Active standby)

What is active data guard and how to enable the standby database in read-only mode (Active standby)?

If you try to open the standby database in read-only mode, you will get the below error:

SQL> ALTER DATABASE OPEN READ ONLY;
ALTER DATABASE OPEN READ ONLY
*
ERROR at line 1:
ORA-10456: cannot open standby database; media recovery session may be in progress

1. First we need to turn off the log apply process on the standby database, using the below:

DGMGRL> edit database <stdby_dbname> set state=apply-off;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL> ALTER DATABASE OPEN READ ONLY;
SQL> select name, open_mode, switchover_status, database_role from v$database;

Start the apply process using the Broker (OR) manually on the standby database:

DGMGRL> edit database <stdby_dbname> set state=apply-on;
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;

To disable the active data guard, the steps are the same:
1. DGMGRL> edit databa

What is the use of public, private, VIP in RAC

What is the use of public, private, VIP in RAC?

Public IP: The public IP address is for the server. This is the same as any server IP address, a unique address which exists in /etc/hosts.

Private IP: Used for the cluster interconnect. Oracle RAC requires "private IP" addresses to manage the CRS & the clusterware heartbeat process and the cache fusion layer.

Virtual IP (VIP): This IP is required for failover in case a node goes down. It will fail over sessions to the surviving node. Instead of host IPs, we will have VIPs in the listener.ora.

Difference between AWR, ADDM and ASH reports

AWR: Automatic Workload Repository gathers, processes and maintains performance stats used for problem detection and self-tuning the databases.

Different components that use AWR are:
- Automatic Database Diagnostic Monitor
- Undo Advisor
- SQL Tuning Advisor
- Segment Advisor

Different types of AWR reports for different purposes:
For a single instance environment: @$ORACLE_HOME/rdbms/admin/awrrpt.sql
For an Oracle RAC environment:     @$ORACLE_HOME/rdbms/admin/awrgrpt.sql
For a particular SQL statement:    @$ORACLE_HOME/rdbms/admin/awrsqrpt.sql
For comparing the reports:         @$ORACLE_HOME/rdbms/admin/awrddrpt.sql

ADDM: The Automatic Database Diagnostic Monitoring report analyzes the AWR data on a regular basis to give you an overview of the root cause of the problem which is affecting your database's performance. It also provides suggestions or recommendations for rectifying any problem identified and lists the areas which are having no issues. ADDM recommends multiple solutions for the DBA to choose from, which

INTERVIEW QUESTIONS PART-2

· Difference between AWR, ASH and ADDM report?
· Main things required to configure RAC?
· Difference between public, private, VIP and SCAN IP?
· What is active dataguard?
· Table to find the gap in applying archive logs.
· 12c features?
· Database tuning?
· What are the things to look at in AWR?
· How to derive the size of logical, physical and memory to create a database for OLTP and data warehouse environments, and things to consider while creating it?
· Where are the statistics stored in the database?
· How to monitor RAC?
· What is the usage of CRS?
· How to check the availability of the nodes in RAC?
· What is DB time and CPU time?
· What are the elements present in the output of the TOP command?
· What does the netstat command do?
· How will you monitor the OS?
· What is a Hugepage?
· How to configur