Oracle ASH use cases

Overview

Oracle takes a snapshot of active sessions once per second and exposes it as the Active Session History (ASH). We can analyze ASH data to solve real-world problems, such as SQL monitoring, SQL statistics, SQL plan management, and finding queries that do not use bind variables.

SQL monitoring

With ASH data, you can monitor the queries that are currently running.

SQL stats

Sometimes you may want to look at snapshots from the past. You can get similar data from the ASH history table, DBA_HIST_ACTIVE_SESS_HISTORY.
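
For example, here is a minimal sketch that lists the statements with the most ASH samples over the last day (it assumes a 12c or later database for the FETCH FIRST clause):

select sql_id, count(*) samples
from dba_hist_active_sess_history
where sample_time > sysdate - 1
and session_type = 'FOREGROUND'
and sql_id is not null
group by sql_id
order by samples desc
fetch first 10 rows only;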

SQL plan management

In some cases, the execution plan changes and causes a performance issue. Using ASH, we can see when the plan changed, which plan is better, and which plan is causing the issue.

Find queries that are not using bind variables

Some applications use literals instead of bind variables. When the literal values differ, each variation gets its own sql_id, even though the statements are identical from the application's perspective. With ASH data, you can find groups of queries that share the same FORCE_MATCHING_SIGNATURE.
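
Here is a minimal sketch that surfaces such groups from the last hour of ASH samples; the threshold of 10 distinct sql_ids is an arbitrary assumption:

select h.force_matching_signature,
       count(distinct h.sql_id) sql_ids,
       count(*) samples
from gv$active_session_history h
where h.sample_time > sysdate - 1/24
and h.force_matching_signature <> 0
group by h.force_matching_signature
having count(distinct h.sql_id) > 10
order by sql_ids desc;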

Example Queries

The following query can be used for SQL monitoring:

alter session set nls_date_format="yyyy-mm-dd hh24:mi:ss";

col sample_time for a30
col machine for a40
col sid for 9999999
col force_matching_signature for 999999999999999999999999
set lines 1000


select v.inst_id, 
       v.sid, 
       v.blocking_session,
       v.blocking_inst_id,
       v.machine, 
       v.osuser, 
       v.schemaname,  
       v.state, 
       v.status, 
       v.sql_id, 
       v.phv, 
       v.force_matching_signature, 
       v.sql_exec_id, 
       v.sql_exec_start, 
       v.sample_time, 
       round((cast(v.sample_time as date) - v.sql_exec_start)*86400) seconds,
       v.session_state, 
       v.event,
       v.program,
       v.sql_text
from (
select 
       h.inst_id, 
       h.session_id sid, 
       h.blocking_session,
       h.blocking_inst_id,
       h.machine, 
       s.osuser, 
       s.schemaname,
       s.state, 
       s.status, 
       h.sql_id,
       h.sql_plan_hash_value phv,
       h.force_matching_signature,
       h.sql_exec_id,
       h.sql_exec_start,
       h.sample_time, 
       h.session_state, 
       h.event, 
       h.program,
       nvl2(t.sql_text, 
           replace(replace(t.sql_text, chr(10), ''), chr(13), ''), '') sql_text,
       row_number() over (partition by 
          h.inst_id, h.session_id, h.session_serial#, 
          h.sql_id, h.sql_exec_id order by h.sample_time desc) r 
from gv$active_session_history h
left join gv$session s on s.sid = h.session_id 
  and s.inst_id = h.inst_id and s.serial# = h.session_serial#
left join gv$sqltext t on t.inst_id = h.inst_id 
  and h.sql_id = t.sql_id and t.piece = 0
where h.sample_time > sysdate - 15/1440
and h.session_type = 'FOREGROUND'
) v
where v.r = 1
and (cast(v.sample_time as date) - v.sql_exec_start)*86400 > 5
order by v.inst_id, v.sample_time desc
;

Find the best plan using SQL execution stats and reports

The optimizer generates execution plans based on the information available at the time the query is processed, and the plan is not always optimal. In some cases, when a bad plan is used, query execution time increases significantly and users start to complain that the application has stopped responding. We call this a plan regression.

How do you respond when a plan regression happens? There are several options available; one of them is to find the best plan and create a baseline from it. To find a better plan, you need to list all the plans that are available and compare the average duration per execution; the best plan has the smallest average duration.

How do you find the best plan? You can query either DBA_HIST_REPORTS or DBA_HIST_SQLSTAT to get the answer.

col begin_interval_time for a30
col avg for 99999.9
col elapsed for 9999999.9
col sql_profile for a30
select s.snap_id, 
       h.begin_interval_time,
       s.sql_id,              
       s.plan_hash_value,
       s.instance_number,
       round(s.elapsed_time_delta/1000000, 1) elapsed,
       s.executions_delta,
       round(s.elapsed_time_delta/nullif(s.executions_delta, 0)/1000000, 1) avg,
       s.sql_profile
from dba_hist_sqlstat s 
join dba_hist_snapshot h on s.snap_id = h.snap_id 
and s.instance_number = h.instance_number
where s.sql_id = '&&sql_id'
order by 1
;

Here is a sample output:

"SNAP_ID","BEGIN_INTERVAL_TIME","SQL_ID","PLAN_HASH_VALUE","INSTANCE_NUMBER","ELAPSED","EXECUTIONS_DELTA","AVG","SQL_PROFILE"
12093,"11-JUL-19 11.00.16.124 AM","55f86bjpx6d1z",3685618265,1,"69.3",14,"4.9",
12094,"11-JUL-19 12.00.27.540 PM","55f86bjpx6d1z",3685618265,1,"34.6",9,"3.8",
12095,"11-JUL-19 01.00.09.192 PM","55f86bjpx6d1z",3685618265,1,"96.3",16,"6.0",
12097,"11-JUL-19 03.00.28.744 PM","55f86bjpx6d1z",1492133555,2,"46.8",11,"4.3",
12113,"12-JUL-19 07.00.05.634 AM","55f86bjpx6d1z",536527346,1,"280.4",1,"280.4",
12113,"12-JUL-19 07.00.05.537 AM","55f86bjpx6d1z",536527346,2,"581.3",3,"193.8",
12114,"12-JUL-19 08.00.15.108 AM","55f86bjpx6d1z",536527346,2,"13341.2",6,"2223.5",
12114,"12-JUL-19 08.00.15.221 AM","55f86bjpx6d1z",536527346,1,"10046.4",7,"1435.2",
12115,"12-JUL-19 09.00.25.884 AM","55f86bjpx6d1z",536527346,1,"12495.6",4,"3123.9",
12115,"12-JUL-19 09.00.25.751 AM","55f86bjpx6d1z",536527346,2,"10089.6",4,"2522.4",

Or, you can query DBA_HIST_REPORTS:

set pages 9999
set lines 200
col plan_hash for 99999999999999999
col sql_id for a30
col duration for 9999999999
alter session set nls_date_format = "yyyymmdd-hh24:mi:ss";

select
  r.period_start_time,
  r.period_end_time,
  r.report_id,
  key1 SQL_ID,
  EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/stats/stat[@name="duration"]') duration,
  EXTRACTVALUE(XMLType(report_summary),'/report_repository_summary/sql/plan_hash') plan_hash
from dba_hist_reports r
where component_name= 'sqlmonitor' and key1 = '&&sql_id'
order by 1
;

The output is shown here:

"PERIOD_START_TIME","PERIOD_END_TIME","REPORT_ID","SQL_ID","DURATION","PLAN_HASH"
"20190711-11:56:51","20190711-11:56:55",1152441,"55f86bjpx6d1z","4","3685618265"
"20190711-12:48:43","20190711-12:48:54",1152524,"55f86bjpx6d1z","11","3685618265"
"20190711-13:17:04","20190711-13:17:09",1152574,"55f86bjpx6d1z","5","3685618265"
"20190711-13:17:04","20190711-13:17:09",1152666,"55f86bjpx6d1z","5","1492133555"
"20190711-13:38:56","20190711-13:39:00",1152634,"55f86bjpx6d1z","4","3685618265"
"20190711-15:06:07","20190711-15:06:12",1152851,"55f86bjpx6d1z","5","1492133555"
"20190712-07:54:13","20190712-08:30:30",1154708,"55f86bjpx6d1z","2177","536527346"
"20190712-07:57:07","20190712-08:30:28",1154709,"55f86bjpx6d1z","2001","536527346"
"20190712-07:59:57","20190712-08:33:34",1154737,"55f86bjpx6d1z","2017","536527346"

Optionally, you can generate a SQL Monitor report using the report ID. Here is the script:

set trimspool on 
set trim on 
set verify off
set feedback off
set pages 0 
set linesize 1000 
set long 1000000 
set longchunksize 1000000 

accept report_id prompt 'enter report_id:'

variable b1 VARCHAR2(30);

begin
    :b1 := '&report_id';
end;
/

column filename new_val filename

select 'sqlmon_' || to_char(sysdate, 'dd_hh24miss_') || :b1 || '.html' filename from dual;
SPOOL &filename

SELECT DBMS_AUTO_REPORT.REPORT_REPOSITORY_DETAIL(RID => :b1, TYPE => 'active') FROM dual
;

spool off
 

It is obvious that plan_hash_value 536527346 is the plan causing the performance issue, while the other plans have similar execution times. All you need to do is load the plan with the smallest average duration as a baseline.
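
For example, here is a minimal sketch that loads the good plan as a baseline with DBMS_SPM, assuming the statement and the chosen plan (3685618265 in this case) are still in the cursor cache:

declare
  l_plans pls_integer;
begin
  -- load the chosen plan from the cursor cache as an accepted baseline plan
  l_plans := dbms_spm.load_plans_from_cursor_cache(
    sql_id          => '55f86bjpx6d1z',
    plan_hash_value => 3685618265);
  dbms_output.put_line(l_plans || ' plan(s) loaded');
end;
/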

load text file into CLOB column

The problem to solve

When you need to load a large text file into a CLOB column, or you need to load many files, and the files are on the client host, there is no simple INSERT or UPDATE statement you can write to accomplish the task.

To load text files programmatically, I created a Python script that generates a SQL script from a text file. Once the SQL script is generated, I can run it from SQL*Plus to insert or update the CLOB column. You can download the script from this link:

load_clob_from_file

How the program works

The script creates a LOB object, opens the object, reads the text file line by line, converts each line into a procedure call that appends the line to the LOB object, closes the object, and updates or inserts the object into a table.

The output of the script is a SQL file that can be run at any time.

A demo

Here is a text file for demo purposes:

<?xml version="1.0"?>
<catalog>
   <book id="bk101">
      <author>Gambardella, Matthew</author>
      <title>XML Developer's Guide</title>
      <genre>Computer</genre>
      <price>44.95</price>
      <publish_date>2000-10-01</publish_date>
      <description>An in-depth look at creating applications 
      with XML.</description>
   </book>
</catalog>

To generate the SQL script, I run the command:

load_clob_from_file.py demo.xml > demo.sql

The output of the script is shown here. You will need to uncomment the INSERT or UPDATE statement at the end and replace it with your actual table and column names.

SPO load_xml.log
SET ECHO ON TERM ON LIN 2000 TRIMS ON NUMF 99999999999999999999
WHENEVER SQLERROR EXIT SQL.SQLCODE;
REM
DECLARE
  l_text clob;
  l_line varchar2(32000);
BEGIN
DBMS_LOB.CREATETEMPORARY(l_text, TRUE);
DBMS_LOB.OPEN(l_text, DBMS_LOB.LOB_READWRITE);
DBMS_LOB.WRITEAPPEND(l_text, 22, q'~<?xml version="1.0"?>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 10, q'~<catalog>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 21, q'~   <book id="bk101">~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 44, q'~      <author>Gambardella, Matthew</author>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 43, q'~      <title>XML Developer's Guide</title>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 30, q'~      <genre>Computer</genre>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 27, q'~      <price>44.95</price>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 46, q'~      <publish_date>2000-10-01</publish_date>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 62, q'~      <description>An in-depth look at creating applications ~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 30, q'~      with XML.</description>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 11, q'~   </book>~'|| CHR(10));
DBMS_LOB.WRITEAPPEND(l_text, 11, q'~</catalog>~'|| CHR(10));
DBMS_LOB.CLOSE(l_text);
-- insert into sales.customer values (4, 'test', sysdate, l_text);
-- update sales.customer set bio = l_text where customer_id = 31;
commit;

END;
/
SPOOL OFF
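
Once the DML statement is uncommented and edited, the generated script can be run from SQL*Plus at any time, for example (hypothetical connect string):

sqlplus username/passwd@orcl @demo.sql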

ORA-65342: source pluggable database has unrecovered transaction

You get this error when cloning a PDB:

SQL> CREATE PLUGGABLE DATABASE demo2 FROM demo1;
CREATE PLUGGABLE DATABASE demo2 FROM demo1
*
ERROR at line 1:
ORA-65342: source pluggable database DEMO1 has unrecovered transaction

Find the transaction ID

SQL> alter session set container = demo1;
SQL> select local_tran_id, global_tran_id, state from DBA_2PC_PENDING;

LOCAL_TRAN_ID          GLOBAL_TRAN_ID                           STATE
---------------------- ---------------------------------------- -----------
11.16.863152           48801.1A7A385C12868ACD2902               prepared

Commit the transaction

SQL> commit force '11.16.863152';

Commit complete.

After all pending transactions are committed, you should be able to clone the PDB.
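
Here is a minimal sketch to verify nothing is left pending before retrying the clone:

SQL> alter session set container = demo1;
SQL> select count(*) from dba_2pc_pending;    -- should return 0
SQL> alter session set container = CDB$ROOT;
SQL> CREATE PLUGGABLE DATABASE demo2 FROM demo1;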

extended stats

Overview

When column expressions or multiple columns of one table are included in query predicates, the optimizer sometimes gets inaccurate cardinality estimates and, therefore, generates sub-optimal plans.

The Oracle documentation provides the following description:

DBMS_STATS enables you to collect extended statistics, which are statistics that can improve cardinality estimates when multiple predicates exist on different columns of a table, or when predicates use expressions.

An extension is either a column group or an expression. Column group statistics can improve cardinality estimates when multiple columns from the same table occur together in a SQL statement. Expression statistics improves optimizer estimates when predicates use expressions, for example, built-in or user-defined functions.

The optimizer uses column group statistics for equality predicates, inlist predicates, and for estimating the GROUP BY cardinality.

create column group stats manually

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS( 
    OWNNAME => 'sh',
    TABNAME => 'customers',
    METHOD_OPT => 'FOR ALL COLUMNS SIZE SKEWONLY ' || 
                  'FOR COLUMNS SIZE SKEWONLY (cust_state_province,country_id)' );
END;
/

create column group stats automatically

BEGIN
  DBMS_STATS.SEED_COL_USAGE(
    sqlset_name => null,
    owner_name => null,
    time_limit => 300);
END;
/

--
-- run explain plans for queries in the workload
--
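
-- for example (a hypothetical workload query against the SH sample schema):
explain plan for
  select * from sh.customers
  where cust_state_province = 'CA' and country_id = 52790;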

SELECT DBMS_STATS.REPORT_COL_USAGE(
         ownname => 'sh', 
         tabname => 'customers')
FROM   DUAL
;

SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname => 'sh', 
         tabname => 'customers') 
FROM   DUAL;

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname => 'sh',
    tabname => 'customers');
END;
/

create column expression stats

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    OWNNAME => 'sh', 
    TABNAME => 'customers', 
    METHOD_OPT => 'FOR ALL COLUMNS SIZE SKEWONLY ' || 
                  'FOR COLUMNS (LOWER(cust_state_province)) SIZE SKEWONLY'
  );
END;
/

show extended stats

SELECT DBMS_STATS.SHOW_EXTENDED_STATS_NAME( 
        OWNNAME => 'sh',
        TABNAME => 'customers',
        EXTENSION => '(cust_state_province,country_id)') col_group_name
FROM DUAL
;

SELECT e.EXTENSION expression, 
       t.NUM_DISTINCT, 
       t.HISTOGRAM
FROM   DBA_STAT_EXTENSIONS e, 
       DBA_TAB_COL_STATISTICS t
WHERE  e.EXTENSION_NAME = t.COLUMN_NAME
AND    e.TABLE_NAME = t.TABLE_NAME
AND    e.OWNER = t.OWNER
;

drop extended stats

BEGIN
  DBMS_STATS.DROP_EXTENDED_STATS( 
    OWNNAME => 'sh', 
    TABNAME => 'customers',
    EXTENSION => '(cust_state_province, country_id)' );
END;
/

BEGIN
  DBMS_STATS.DROP_EXTENDED_STATS(
    OWNNAME => 'sh', 
    TABNAME => 'customers', 
    EXTENSION => '(LOWER(cust_state_province))'
  );
END;
/

Create a PDB from a non-CDB

Overview

To create a PDB from a non-CDB, you need to follow these steps:

  • open the non-CDB in read-only mode
  • generate the XML file
  • check compatibility
  • create the PDB from the XML file
  • run the post script, noncdb_to_pdb.sql

Create the XML file

$ srvctl stop database -db ORCL -o immediate

SQL> startup nomount
SQL> ALTER DATABASE MOUNT;
SQL> ALTER DATABASE OPEN READ ONLY;

BEGIN
  DBMS_PDB.DESCRIBE(pdb_descr_file => '/tmp/orcl.xml');
END;
/

Check compatibility

SET SERVEROUTPUT ON
DECLARE
  compatible CONSTANT VARCHAR2(3) :=
    CASE DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
      pdb_descr_file => '/tmp/orcl.xml',
      pdb_name => 'NCDB')
    WHEN TRUE THEN 'YES'
    ELSE 'NO' END;
BEGIN
  DBMS_OUTPUT.PUT_LINE(compatible);
END;
/

Create PDB using xml file

$ srvctl stop database -db ORCL -o immediate

SQL> CREATE PLUGGABLE DATABASE ORCLPDB USING '/tmp/orcl.xml';
SQL> alter session set container = ORCLPDB;
SQL> @?/rdbms/admin/noncdb_to_pdb.sql
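
When the script completes, open the new PDB and verify its status:

SQL> ALTER PLUGGABLE DATABASE ORCLPDB OPEN;
SQL> SHOW PDBS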

Oracle resource manager for CDB

Overview

You can create a resource plan at the CDB level, at the PDB level, or both. This post summarizes how to use Resource Manager in a CDB. The CDB-level resource plan controls resource allocation between individual PDBs, and the PDB-level resource plan controls resource allocation between consumer groups.

You have two options when creating resource plan directives: plan directives for individual PDBs or plan directives for performance profiles.

There are three plan directive attributes you can set: shares, utilization_limit, and parallel_server_limit.

When you create a resource plan at the PDB level, the following restrictions apply:

  • A PDB resource plan cannot have subplans.
  • A PDB resource plan can have a maximum of eight consumer groups.
  • A PDB resource plan cannot have a multiple-level scheduling policy.

1. memory limits for PDBs

The following parameters control the memory limits for PDBs; a usage sketch follows the list.
Prerequisites: NONCDB_COMPATIBLE=false, MEMORY_TARGET=0

DB_CACHE_SIZE
SHARED_POOL_SIZE
PGA_AGGREGATE_LIMIT
PGA_AGGREGATE_TARGET
SGA_MIN_SIZE
SGA_TARGET
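
For example, here is a minimal sketch that caps memory for one PDB (hypothetical PDB name and values):

ALTER SESSION SET CONTAINER = salespdb;
ALTER SYSTEM SET SGA_TARGET = 2G SCOPE = BOTH;
ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 3G SCOPE = BOTH;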

2. I/O limits for PDBs

The following parameters cannot be set in a non-CDB. They can be set in a PDB or in the root; the PDB-level setting overrides the setting in the root. A usage sketch follows the list.

MAX_IOPS
MAX_MBPS
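
For example (hypothetical values, run inside the target PDB):

ALTER SESSION SET CONTAINER = salespdb;
ALTER SYSTEM SET MAX_IOPS = 10000 SCOPE = BOTH;
ALTER SYSTEM SET MAX_MBPS = 200 SCOPE = BOTH;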

3. the common procedures when creating/updating/deleting a plan or plan directive

exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
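-- make the plan/directive changes from sections 4-8 here, then validate and submit: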
exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

4. create/update/delete CDB plan

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan => 'newcdb_plan',
    comment => 'CDB resource plan for newcdb');
  END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_PLAN(
    plan => 'newcdb_plan',
    new_comment => 'CDB plan for PDBs in newcdb');
  END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.DELETE_CDB_PLAN(
  plan => 'newcdb_plan');
  END;
/

5. create/update/delete resource plan directive for individual PDBs

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'newcdb_plan',
    pluggable_database => 'salespdb',
    shares => 3,
    utilization_limit => 100,
    parallel_server_limit => 100);
  END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_PLAN_DIRECTIVE(
    plan => 'newcdb_plan',
    pluggable_database => 'operpdb',
    new_shares => 1,
    new_utilization_limit => 10,
    new_parallel_server_limit => 20);
  END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.DELETE_CDB_PLAN_DIRECTIVE(
    plan => 'newcdb_plan',
    pluggable_database => 'operpdb');
  END;
/

6. create/update/delete resource plan directive for performance profiles

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
    plan => 'newcdb_plan',
    profile => 'gold',
    shares => 3,
    utilization_limit => 100,
    parallel_server_limit => 100);
  END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
    plan => 'newcdb_plan',
    profile => 'silver',
    shares => 2,
    utilization_limit => 40,
    parallel_server_limit => 40);
  END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_PROFILE_DIRECTIVE(
    plan => 'newcdb_plan',
    profile => 'copper',
    new_shares => 1,
    new_utilization_limit => 10,
    new_parallel_server_limit => 20);
  END;
/

BEGIN
  DBMS_RESOURCE_MANAGER.DELETE_CDB_PROFILE_DIRECTIVE(
    plan => 'newcdb_plan',
    profile => 'copper');
  END;
/

7. update the default directive for PDBs


BEGIN
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_DEFAULT_DIRECTIVE(
    plan => 'newcdb_plan',
    new_shares => 1,
    new_utilization_limit => 50,
    new_parallel_server_limit => 50);
END;
/

8. update the autotask directive

BEGIN
  DBMS_RESOURCE_MANAGER.UPDATE_CDB_AUTOTASK_DIRECTIVE(
    plan => 'newcdb_plan',
    new_shares => 1,
    new_utilization_limit => 75,
    new_parallel_server_limit => 75);
END;
/

9. enable/disable resource manager

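-- in the CDB root, set or clear the CDB plan: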
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'newcdb_plan';
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

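-- inside a PDB, set or clear that PDB's plan: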
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'salespdb_plan';
ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = '';

10. monitoring resource manager

DBA_CDB_RSRC_PLAN_DIRECTIVES
V$RSRCPDBMETRIC
V$RSRCPDBMETRIC_HISTORY
DBA_HIST_RSRC_PDB_METRIC
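
For example, here is a minimal sketch that lists the directives of a CDB plan (assuming the view columns mirror the directive attributes above):

select plan, pluggable_database, shares, utilization_limit, parallel_server_limit
from dba_cdb_rsrc_plan_directives
where plan = 'NEWCDB_PLAN'
order by pluggable_database;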

tns lookup

A TNS name is a logical name that users and applications use to connect to a database. The name could be an alias for a service name, which is defined on a pluggable database, which in turn is plugged into a container database. As a DBA managing many databases, it can be a challenge to remember which container database and which pluggable database a TNS name points to.

This script helps you look up a TNS name and return the details of the database it points to: pluggable database name, container database name, service name, and host name.

Prerequisites:
You need an Oracle client, and you need to set the same password for user DBSNMP in all databases.

#!/bin/bash

function lookup
{
  if ((${#ORACLE_HOME} < 1));
  then
    export ORACLE_HOME=/u01/app/oracle/product/12.2.0.1/dbhome_1
    export PATH=$PATH:$ORACLE_HOME/bin
  fi 
  sql="select '|' || sys_context('USERENV','DB_UNIQUE_NAME')|| '|' ||" 
  sql="${sql} sys_context('USERENV','SERVICE_NAME')|| '|' ||" 
  sql="${sql} regexp_substr(sys.database_name, '^[[:alnum:]]+') || '|' ||" 
  sql="${sql} regexp_substr(host_name, '^[[:alnum:]]+') info from v\$instance;"
  result=$(echo ${sql} | sqlplus -S dbsnmp/passwd@${db_name})
  if [[ ${result} == *ORA-* ]];
  then
    echo "${db_name} cannot be reached, name is invalid or service is down"
  else
    database=$(echo ${result} | cut -d"|" -f2)
    service=$(echo ${result} | cut -d"|" -f3)
    pdb=$(echo ${result} | cut -d"|" -f4)
    host=$(echo ${result} | cut -d"|" -f5)
    if [ "${database}" != "${pdb}" ]; then
      echo "pluggable: ${pdb}"
    fi
    echo "database : ${database}"
    echo "service  : ${service}"
    echo "host_name: ${host}"
  fi
}

if [ $# -eq 1 ];
then
  db_name=$1
  lookup
else
  echo "usage: $0 name"
fi

Here is an example:


$ tnslookup sales_dev

pluggable: SALES01
database : ORCL2
service  : SDEV.DOMAIN.COM
host_name: server01

build physical standby from active database – RMAN script

Overview

You need to prepare the primary database and the standby instance before continuing with the steps in this document.

This document assumes the primary database is in archivelog mode and has force logging enabled.

Backup Primary

Here is the RMAN script for backing up the primary:

run
{
sql "alter system switch logfile";
backup database;
backup archivelog all;
backup current controlfile for standby;
}
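
Run the backup script with a command like this: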

rman target / catalog=username/passwd@${rman_db} cmdfile=${script_file} log=${log_folder}/${log_file}

Duplicate standby

run
{
ALLOCATE CHANNEL dd1 TYPE DISK;
ALLOCATE CHANNEL dd2 TYPE DISK;
ALLOCATE auxiliary CHANNEL dd3 TYPE SBT_TAPE parms='BLKSIZE=1048576,SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so,ENV=(STORAGE_UNIT=,BACKUP_HOST=,ORACLE_HOME=)';
ALLOCATE auxiliary CHANNEL dd4 TYPE SBT_TAPE parms='BLKSIZE=1048576,SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so,ENV=(STORAGE_UNIT=,BACKUP_HOST=,ORACLE_HOME=)';
ALLOCATE auxiliary CHANNEL dd5 TYPE SBT_TAPE parms='BLKSIZE=1048576,SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so,ENV=(STORAGE_UNIT=,BACKUP_HOST=,ORACLE_HOME=)';
ALLOCATE auxiliary CHANNEL dd6 TYPE SBT_TAPE parms='BLKSIZE=1048576,SBT_LIBRARY=${ORACLE_HOME}/lib/libddobk.so,ENV=(STORAGE_UNIT=,BACKUP_HOST=,ORACLE_HOME=)';
duplicate target database for standby from active database nofilenamecheck dorecover;
}
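
Then run the duplicate script with a command like this: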

rman target sys/passwd@${primary_db} auxiliary=sys/passwd@${standby_db} catalog=username/passwd@${rman_db} cmdfile=${script_file} log=${log_folder}/${log_file}