Channel: Oracle DBA's Blog

Tip: How to change the SQL> prompt.



1) Go to $ORACLE_HOME/sqlplus/admin.
2) Add this command to the glogin.sql file:      set sqlprompt '_USER>';
3) Connect to the database.



############################     DEMO   ############################

SQL> exit;


[oracle@demo]$ cd $ORACLE_HOME/sqlplus/admin
[oracle@demo]$ vi glogin.sql

set sqlprompt '_USER>';


[oracle@demo]$ sqlplus dbauser/password

SQL*Plus: Release 11.2.0.3.0 Production on Fri Aug 23 16:28:08 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


DBAUSER>



###################################################################
SQL PROMPT OPTIONS

_connect_identifier    displays the connection identifier.
_date                  displays the date.
_editor                displays the editor used by the EDIT command.
_o_version             displays the Oracle version.
_o_release             displays the Oracle release.
_privilege             displays the privilege, such as SYSDBA, SYSOPER, or SYSASM.
_sqlplus_release       displays the SQL*Plus release.
_user                  displays the current user name.
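These variables can be combined in a single prompt string. For example, a glogin.sql entry such as the following (an illustrative sketch, not from the demo above) shows both the user and the connect identifier:

```sql
-- Illustrative glogin.sql entry: current user plus connect identifier
set sqlprompt '_USER@_CONNECT_IDENTIFIER> '
```

With this setting, the prompt looks something like DBAUSER@orcl>, depending on the connect string used.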





MySQL Standby Creation (Master - Slave Replication)



Contents

MySQL Standby Creation (Master - Slave Replication)

    Environment Details
    Configuration Steps on Master Node
        Step 1: Mandatory Parameters (/etc/my.cnf)
        Step 2: User Creation
        Step 3: Master Status
    Configuration Steps on Slave Node
        Step 4: Mandatory Parameters (/etc/my.cnf)
        Step 5: Connection Testing (Slave to Master)
        Step 6: Configuration of Slave Process
        Step 7: Start Slave
        Step 8: Slave Status
        Step 9: Test the Replication
    Switchover (Slave to Master)
        Master Node
        Slave Node



MySQL Standby Creation (Master - Slave Replication)

Environment Details:

Master Node IP          10.0.0.81
Master Node Hostname    mysqlnode1.sukumar.com
Slave Node IP           10.0.0.82
Slave Node Hostname     mysqlnode2.sukumar.com

Configuration Steps on Master Node

Step 1: Mandatory Parameters (/etc/my.cnf)

The parameters listed below are mandatory for the master node. Make sure to set a unique server-id.

[mysql@mysqlnode1 ~]$ cat /etc/my.cnf
[mysqld]
log-bin=/var/lib/mysql/mysql-bin
max_binlog_size=4096
binlog_format=row
socket=/var/lib/mysql/mysql.sock
server-id=1
binlog_do_db=demo
binlog-ignore-db=mysql
binlog-ignore-db=test

[client]
socket=/var/lib/mysql/mysql.sock

[mysqld_safe]
err-log=/var/log/mysqld-node1.log

Step 2: User Creation


Connect to MySQL and create a dedicated user for replication, granting the privileges required to connect from the slave node.

mysql> grant replication slave on *.* to
    -> 'rep_user'@'10.0.0.82' identified by 'rep_user';
Query OK, 0 rows affected (0.00 sec)

mysql> select user, host from mysql.user
    -> where user='rep_user';
+----------+-----------+
| user     | host      |
+----------+-----------+
| rep_user | 10.0.0.82 |
+----------+-----------+
1 row in set (0.00 sec)

Step 3: Master Status


Check the master status and note the binary log file name and position; they are needed when configuring the slave.

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000005 |      120 | demo         | mysql,test       |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Configuration Steps on Slave Node

Step 4: Mandatory Parameters (/etc/my.cnf)


The parameters listed below are mandatory for the slave node. Make sure to set a unique server-id.

[mysql@mysqlnode2 ~]$ cat /etc/my.cnf
[mysqld]
log-bin=/var/lib/mysql/mysql2-bin
max_binlog_size=4096
binlog_format=row
socket=/var/lib/mysql/mysql.sock
server-id=2

[client]
socket=/var/lib/mysql/mysql.sock

Step 5: Connection Testing (Slave to Master)


Test the connection from the slave node to the master node with the command below.

[mysql@mysqlnode2 ~]$ mysql -u rep_user -h mysqlnode1.sukumar.com -prep_user demo

Step 6:  Configuration of Slave process


CHANGE MASTER TO configures the slave, and the server remembers these settings, so in recent MySQL versions this replaces the equivalent my.cnf settings.
Note: Set the values to match the master node; MASTER_LOG_FILE and MASTER_LOG_POS must come from the SHOW MASTER STATUS output on the master.

[mysql@mysqlnode2 ~]$ mysql -u root -pwelcome123 demo

mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.81',
       MASTER_USER='rep_user',
       MASTER_PASSWORD='rep_user',
       MASTER_PORT=3306,
       MASTER_LOG_FILE='mysql-bin.000005',
       MASTER_LOG_POS=120,
       MASTER_CONNECT_RETRY=10;

Step 7:  Start Slave


Start the slave process with the below command.

mysql> start slave;
ERROR 1872 (HY000): Slave failed to initialize relay log info structure from the repository

If you receive the error “ERROR 1872 (HY000): Slave failed to initialize relay log info structure from the repository”, reset the slave and repeat step 6.

mysql> reset slave;
Query OK, 0 rows affected (0.00 sec)

mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.81', MASTER_USER='rep_user', MASTER_PASSWORD='rep_user', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=120, MASTER_CONNECT_RETRY=10;
Query OK, 0 rows affected, 2 warnings (0.05 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

Step 8: Slave Status


Verify that both of the thread statuses below are "Yes" and that the remaining values are appropriate.

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes


mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.0.81
                  Master_User: rep_user
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 120
               Relay_Log_File: mysqlnode2-relay-bin.000002
                Relay_Log_Pos: 283
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 120
              Relay_Log_Space: 461
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 8a701d66-2119-11e3-9ab2-0689f6cf2c77
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0

Step 9: Test the Replication


Perform transactions on the master node; the slave node will stay in sync automatically.
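A minimal check might look like the following sketch (the table name and data here are illustrative; demo is the replicated database configured in binlog_do_db above):

```sql
-- On the master: create a table in the replicated database and insert a row
USE demo;
CREATE TABLE rep_check (id INT PRIMARY KEY, note VARCHAR(50));
INSERT INTO rep_check VALUES (1, 'replication check');

-- On the slave: the same table and row should appear almost immediately
SELECT * FROM demo.rep_check;
```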

Switchover (Slave to Master)


Master Node


FLUSH LOGS closes and reopens all log files. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file.

FLUSH LOGS;

Slave Node


Stop the slave process and reset the master. This configures the node as a master, and it then acts according to its my.cnf settings.

STOP SLAVE;
RESET MASTER;
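After the reset, the promoted node starts over with a fresh binary log, which can be confirmed (a sketch) with:

```sql
-- The former slave now reports its own binary log, restarted from the beginning
SHOW MASTER STATUS;
```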

Degree of Parallelism - 11gR2 feature


Pre-Requisites

SQL> sho parameter parallel_degree
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_limit                string      CPU
parallel_degree_policy               string      MANUAL


If parallel_servers_target is less than parallel_max_servers, parallel statement queuing can occur; if not, it cannot, because the parallel_max_servers limit will be reached before the Auto DOP queuing logic kicks in.
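When queuing does occur, the waiting statements can be observed (a sketch; SQL monitoring requires the appropriate Tuning Pack licensing):

```sql
-- Statements held by parallel statement queuing show STATUS = 'QUEUED'
select sql_id, status
from   v$sql_monitor
where  status = 'QUEUED';
```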
SQL> sho parameter parallel_servers_target
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_servers_target              integer     16


SQL> sho parameter parallel_max_servers
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers                 integer     40

Test Table Creation

Created a table test_dop with dba_objects data.
SQL> sho parameter parallel_degree_policy
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy               string      MANUAL

SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------
Plan hash value: 381934326

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |     1 |   207 |   298   (0)| 00:00:04 |
|   1 |  TABLE ACCESS FULL| TEST_DOP |     1 |   207 |   298   (0)| 00:00:04 |
------------------------------------------------------------------------------
Note
-----
   - dynamic sampling used for this statement (level=2)

Explain plan with Parallel hint

Check the Explain Plan by passing the Parallel Hint.
SQL> explain plan for select /*+ parallel (test_dop,8) */ * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 365997929

---------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |     1 |   207 |    41   (0)| 00:00:01 |
|   1 |  PX COORDINATOR      |          |       |       |            |          |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |     1 |   207 |    41   (0)| 00:00:01 |
|   3 |    PX BLOCK ITERATOR |          |     1 |   207 |    41   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| TEST_DOP |     1 |   207 |    41   (0)| 00:00:01 |
---------------------------------------------------------------------------------

  - dynamic sampling used for this statement (level=2)


Explain plan with DOP


Check the explain plan after enabling Auto DOP; automatic DOP is still skipped, as the Note section of the plan shows.
SQL> alter system set parallel_degree_policy=auto;

System altered.

SQL> sho parameter parallel_degree_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy               string      AUTO


SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
Plan hash value: 381934326

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |     1 |   207 |   298   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TEST_DOP |     1 |   207 |   298   (0)| 00:00:01 |
------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - automatic DOP: skipped because of IO calibrate statistics are missing


Stats Collection

Collect the table stats and try again. (The Rows estimate in the plan has increased now that statistics exist.)
SQL>  EXEC DBMS_STATS.GATHER_TABLE_STATS ('sukku', 'test_dop');

PL/SQL procedure successfully completed.

 
SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
Plan hash value: 381934326
------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |    51 |  4590 |   298   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TEST_DOP |    51 |  4590 |   298   (0)| 00:00:01 |
------------------------------------------------------------------------------

Note
-----
   - automatic DOP: skipped because of IO calibrate statistics are missing

Enabling Auto DOP alone is not enough; it is still skipped because the I/O calibration statistics are missing.
  

IO Calibration

Make sure IO_CALIBRATION_STATUS is READY.
SQL> select status from V$IO_CALIBRATION_STATUS;

STATUS
-------------
NOT AVAILABLE

Make sure the parameters below are set, so that IO_CALIBRATION_STATUS can become READY.
disk_asynch_io = true
filesystemio_options = asynch


SQL> sho parameter disk_asynch_io

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
disk_asynch_io                       boolean     TRUE

SQL> sho parameter filesystemio_options

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      none


SQL> alter system set filesystemio_options=asynch scope=spfile;

System altered.


Database Restart

Bounce the database; a restart is mandatory for this parameter change.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Database mounted.
Database opened.


SQL> sho parameter filesystemio_options

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      ASYNCH


Use the PL/SQL block below to run I/O calibration and change IO_CALIBRATION_STATUS to READY.
SET SERVEROUTPUT ON
DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
-- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
   DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);

  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
  DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);
end;
/

The output should look like the following (values vary by system).
max_iops = 89
latency  = 10
max_mbps = 38

PL/SQL procedure successfully completed.


SQL> select status from v$IO_CALIBRATION_STATUS;

STATUS
-------------
READY


DOP enabled

Without passing a parallel hint, query parallelism has been achieved by enabling Auto DOP.

SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
Plan hash value: 365997929
---------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    51 |  4590 |   166   (0)| 00:00:01 |
|   1 |  PX COORDINATOR      |          |       |       |            |          |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |    51 |  4590 |   166   (0)| 00:00:01 |
|   3 |    PX BLOCK ITERATOR |          |    51 |  4590 |   166   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| TEST_DOP |    51 |  4590 |   166   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2
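To confirm at run time that parallel execution servers were actually used, the session-level PX statistics can be checked from the same session (a sketch):

```sql
-- Run the query first, then inspect the parallel query statistics
select statistic, last_query, session_total
from   v$pq_sesstat;
```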

Oracle GoldenGate Replication (Oracle to Oracle) - RAC Database



Operating System : RHEL5 - 64bit
Database         : Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - Production



STEP 1: Install the GoldenGate software by extracting the tar file.
**************************** SOURCE   ***********************
[oracle@rac3 goldengate]$ cd /home/oracle/gg

[oracle@rac3 gg]$ ls
fbo_ggs_Linux_x86_ora11g_32bit.tar

[oracle@rac3 gg]$ tar -xvf fbo_ggs_Linux_x86_ora11g_32bit.tar

****************************  TARGET   ***********************

[oracle@rac3 goldengate]$ cd /home/oracle/gg

[oracle@rac3 gg]$ ls
fbo_ggs_Linux_x86_ora11g_32bit.tar

[oracle@rac3 gg]$ tar -xvf fbo_ggs_Linux_x86_ora11g_32bit.tar

STEP 2: It is mandatory to set PATH and LD_LIBRARY_PATH before executing the ggsci command; otherwise you will receive the error shown in the TARGET section below.

 ****************************  SOURCE   ***********************

[oracle@rac3 bin-D_H]$export PATH=$PATH:/home/oracle/gg
[oracle@rac3 bin-D_H]$export LD_LIBRARY_PATH=$ORACLE_HOME/lib/:/home/oracle/gg
[oracle@rac3 bin-D_H]$cd /home/oracle/gg
[oracle@rac3 gg-D_H]$./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x86, 32bit (optimized), Oracle 11g on Apr 23 2012 08:09:25

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (rac3.sukku.com) 1>
 

*****************************  TARGET   ***********************

[oracle@rac4 gg-D_H]$./ggsci
./ggsci: error while loading shared libraries: libnnz11.so: cannot open shared object file: No such file or directory

[oracle@rac4 gg-D_H]$export PATH=$PATH:/home/oracle/gg
[oracle@rac4 gg-D_H]$export LD_LIBRARY_PATH=$ORACLE_HOME/lib/:/home/oracle/gg
[oracle@rac4 gg-D_H]$pwd
/home/oracle/gg
[oracle@rac4 gg-D_H]$./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO
Linux, x86, 32bit (optimized), Oracle 11g on Apr 23 2012 08:09:25

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (rac4.sukku.com) 1>


STEP 3: It is mandatory to create subdirectories using the “create subdirs” command at the ggsci prompt.

**************************** SOURCE   ***********************

GGSCI (rac3.sukku.com) 1> create subdirs

Creating subdirectories under current directory /home/oracle/gg

Parameter files                /home/oracle/gg/dirprm: already exists
Report files                   /home/oracle/gg/dirrpt: created
Checkpoint files               /home/oracle/gg/dirchk: created
Process status files           /home/oracle/gg/dirpcs: created
SQL script files               /home/oracle/gg/dirsql: created
Database definitions files     /home/oracle/gg/dirdef: created
Extract data files             /home/oracle/gg/dirdat: created
Temporary files                /home/oracle/gg/dirtmp: created
Stdout files                   /home/oracle/gg/dirout: created


***************************** TARGET   ***********************

GGSCI (rac4.sukku.com) 1> create subdirs

Creating subdirectories under current directory /home/oracle/gg

Parameter files                /home/oracle/gg/dirprm: already exists
Report files                   /home/oracle/gg/dirrpt: created
Checkpoint files               /home/oracle/gg/dirchk: created
Process status files           /home/oracle/gg/dirpcs: created
SQL script files               /home/oracle/gg/dirsql: created
Database definitions files     /home/oracle/gg/dirdef: created
Extract data files             /home/oracle/gg/dirdat: created
Temporary files                /home/oracle/gg/dirtmp: created
Stdout files                   /home/oracle/gg/dirout: created


STEP 4: Create a database user for GoldenGate to connect to the database from ggsci.

****************************  SOURCE   ***********************

SQL> grant connect,resource, dba, select any dictionary, select any table, create table, flashback any table, execute on dbms_flashback, execute on utl_file to ggs identified by ggs;

Grant succeeded.

SQL> conn ggs/ggs
Connected.

*****************************  TARGET   ***********************

SQL> grant connect,resource, dba, select any dictionary, select any table, create table, flashback any table, execute on dbms_flashback, execute on utl_file to ggt identified by ggt;

Grant succeeded.

SQL> conn ggt/ggt
Connected.


STEP 5: It is mandatory to log in to the database to access data from ggsci.
 
****************************  SOURCE   ***********************

GGSCI (rac3.sukku.com) 2> dblogin userid ggs, password ggs
Successfully logged into database.

*****************************  TARGET   ***********************

Note: Multiple DB instances may be running on the same node. It is mandatory to set the environment to the instance where the GoldenGate user was created.

GGSCI (rac4.sukku.com) 2> dblogin userid ggt, password ggt
ERROR: Unable to connect to database using user ggt. Please check privileges.
ORA-12162: TNS:net service name is incorrectly specified.

[oracle@rac4 gg-D_H]$. oraenv
ORACLE_SID = [oracle] ? ggtestdb2
The Oracle base for ORACLE_HOME=/oraeng/app/oracle/product/11.2.0 is /oraeng/app/oracle/product

GGSCI (rac4.sukku.com) 1> dblogin userid ggt, password ggt
Successfully logged into database.

##############################################################

Note: ggsci commands do not accept a trailing ‘;’.

GGSCI (rac3.sukku.com) 3> info all;               -----     ; not allowed for ggsci commands.
ERROR: Invalid command.

##############################################################

STEP 6: Check which processes are running in GoldenGate.

****************************  SOURCE   ***********************

GGSCI (rac3.sukku.com) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt  --- by default Manager process will be in stopped status.

MANAGER     STOPPED 
 
Note: Edit the manager parameter file. User authentication in the parameter file is not mandatory, but it is recommended.

GGSCI (rac3.sukku.com) 5> edit params mgr
---------- vi editor----
port 7809
userid ggs, password ggs --- (NOT Mandatory)
------------------------

Note: To start the manager process.

GGSCI (rac3.sukku.com) 6> start manager

Manager started.

GGSCI (rac3.sukku.com) 7> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                          

*****************************  TARGET   ***********************
 
[oracle@rac4 dirprm-D_H]$pwd
/home/oracle/gg/dirprm

[oracle@rac4 dirprm-D_H]$vi mgr.prm

port 7809

GGSCI (rac4.sukku.com) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED

Note: To start the manager process.

GGSCI (rac4.sukku.com) 5> start manager

Manager started.


Note: To stop the manager process.

GGSCI (rac4.sukku.com) 6> stop manager
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)? yes

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

GGSCI (rac4.sukku.com) 7> start manager

Manager started.

Note: To check the status of manager process.

GGSCI (rac4.sukku.com) 8> status manager

Manager is running (IP port rac4.sukku.com.7809).


GGSCI (rac4.sukku.com) 9>

****************************  SOURCE   ***********************
             --- Initial Load through extract process -------
Note: Before starting the initial-load replication, make sure the table structure exists on the target side.
“SOURCEISTABLE” is the keyword/parameter to run the initial load through GoldenGate.

GGSCI (rac3.sukku.com) 9> ADD EXTRACT initload, SOURCEISTABLE
EXTRACT added.

GGSCI (rac3.sukku.com) 10> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt   ---  Initial load extract process will not show in info all

MANAGER     RUNNING          


GGSCI (rac3.sukku.com) 11> edit params initload

EXTRACT initload
USERID ggs, PASSWORD ggs
RMTHOST rac4, MGRPORT 7809
RMTTASK replicat, GROUP repload
TABLE ggs.t;

 *****************************  TARGET   ***********************

“SPECIALRUN” is the keyword to receive the initial load on the replicat side.
The extract/replicat group name must be at most 8 characters.

GGSCI (rac4.sukku.com) 8> ADD REPLICAT initload2, SPECIALRUN
ERROR: Invalid group name (must be at most 8 characters).

GGSCI (rac4.sukku.com) 9> ADD REPLICAT repload, SPECIALRUN
REPLICAT added.

GGSCI (rac4.sukku.com) 10> EDIT PARAMS  repload

REPLICAT repload
USERID ggt, PASSWORD ggt
ASSUMETARGETDEFS
MAP ggs.t, TARGET ggt.t;

****************************  SOURCE   ***********************
Example: Create a table and insert a few records.

SQL> select count(*) from t;

  COUNT(*)
----------
       160

GGSCI (rac3.sukku.com) 23> start extract initload

Sending START request to MANAGER...
EXTRACT INITLOAD starting
 
GGSCI (rac3.sukku.com) 24> info extract initload

EXTRACT    INITLOAD Last Started 2012-09-20 15:17   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint Table GGS.T
                     2012-09-20 15:17:51 Record 1
Task                 SOURCEISTABLE

GGSCI (rac3.sukku.com) 25> info extract initload

EXTRACT    INITLOAD  Last Started 2012-09-20 15:20   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table GGS.T
                     2012-09-20 15:20:22  Record 160
Task                 SOURCEISTABLE

*****************************  TARGET   ***********************

Create a table on the target with the same structure as the source table.

GGSCI (rac4.sukku.com) 17> info replicat repload

REPLICAT   REPLOAD   Initialized   2012-09-20 15:02   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:18:04 ago)
Log Read Checkpoint  Not Available
Task                 SPECIALRUN


GGSCI (rac4.sukku.com) 18> info replicat repload

REPLICAT   REPLOAD   Initialized   2012-09-20 15:02   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:18:55 ago)
Log Read Checkpoint  Not Available
Task                 SPECIALRUN

SQL> select count(*) from t;

  COUNT(*)
----------
       160


Note: Checkpoint Table is mandatory for normal Extract & Replicat.
GGSCI (rac3.sukku.com) 26> edit params ./GLOBALS
GGSCHEMA ggs
CHECKPOINTTABLE ggs.chkpt

GGSCI (rac3.sukku.com) 27> dblogin userid ggs, password ggs
Successfully logged into database.

GGSCI (rac3.sukku.com) 28> add checkpointtable ggs.chkpt
Successfully created checkpoint table ggs.chkpt.

SQL> select * from tab;

TNAME                          TABTYPE  CLUSTERID
------------------------------ ------- ----------
CHKPT             TABLE
CHKPT_LOX         TABLE
T                 TABLE

SQL> desc chkpt_lox
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NAME                                NOT NULL VARCHAR2(8)
 GROUP_KEY                                 NOT NULL NUMBER(19)
 LOG_CMPLT_CSN                             NOT NULL VARCHAR2(129)
 LOG_CMPLT_XIDS_SEQ                        NOT NULL NUMBER(5)
 LOG_CMPLT_XIDS                            NOT NULL VARCHAR2(2000)

SQL> desc chkpt
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 GROUP_NAME                                NOT NULL VARCHAR2(8)
 GROUP_KEY                                 NOT NULL NUMBER(19)
 SEQNO                                              NUMBER(10)
 RBA                                       NOT NULL NUMBER(19)
 AUDIT_TS                                           VARCHAR2(29)
 CREATE_TS                                 NOT NULL DATE
 LAST_UPDATE_TS                            NOT NULL DATE
 CURRENT_DIR                               NOT NULL VARCHAR2(255)
 LOG_CSN                                            VARCHAR2(129)
 LOG_XID                                            VARCHAR2(129)
 LOG_CMPLT_CSN                                      VARCHAR2(129)
 LOG_CMPLT_XIDS                                     VARCHAR2(2000)
 VERSION                                            NUMBER(3)


*****************************  SOURCE    ***********************

“threads 2” is for RAC databases, so that GoldenGate is aware it must read data from two sources (the node 1 and node 2 online redo logs).

GGSCI (rac3.sukku.com) 30> add extract ext,tranlog, threads 2, begin now
EXTRACT added.

GGSCI (rac3.sukku.com) 31> add rmttrail /home/oracle/gg/dirdat/rt, extract ext
RMTTRAIL added.

GGSCI (rac3.sukku.com) 32> edit params ext

EXTRACT ext
USERID ggs, PASSWORD ggs
RMTHOST rac4, MGRPORT 7809
RMTTRAIL /home/oracle/gg/dirdat/rt
TABLE ggs.t;

*****************************  TARGET   ***********************

GGSCI (rac4.sukku.com) 19> add replicat rep, exttrail /home/oracle/gg/dirdat/rt
ERROR: No checkpoint table specified for ADD REPLICAT.

GGSCI (rac4.sukku.com) 20> edit params ./GLOBALS
GGSCHEMA ggt
CHECKPOINTTABLE ggt.chkpt

GGSCI (rac4.sukku.com) 21> dblogin userid ggt, password ggt
Successfully logged into database.

GGSCI (rac4.sukku.com) 22> add checkpointtable ggt.chkpt
Successfully created checkpoint table ggt.chkpt.

GGSCI (rac4.sukku.com) 23> add replicat rep, exttrail /home/oracle/gg/dirdat/rt
REPLICAT added.

GGSCI (rac4.sukku.com) 24> edit params rep
REPLICAT rep
ASSUMETARGETDEFS
USERID ggt, PASSWORD ggt
MAP ggs.t, TARGET ggt.t;

##############################################################

*****************************  SOURCE   ***********************

GGSCI (rac3.sukku.com) 36> start extract ext

Sending START request to MANAGER ...
EXTRACT EXT starting

GGSCI (rac3.sukku.com) 51> info extract ext

EXTRACT    EXT       Initialized   2012-09-20 16:10   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:49 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2012-09-20 16:10:08  Thread 1, Seqno 0, RBA 0
                     SCN 0.0 (0)
Log Read Checkpoint  Oracle Redo Logs
                     2012-09-20 16:10:08  Thread 2, Seqno 0, RBA 0
                     SCN 0.0 (0)


GGSCI (rac3.sukku.com) 52> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                          
EXTRACT     RUNNING     EXT         00:00:00      00:00:58   

---------------------------------------------   without threads 2 option in RAC DB   ------------------------------------------
2012-09-20 15:57:18  ERROR   OGG-00446  Oracle GoldenGate Capture for Oracle, ext.prm:  The number of Oracle redo threads (2) is not the same as the number of checkpoint threads (1). EXTRACT groups on RAC systems should be created with the THREADS parameter (e.g., ADD EXT <group name>, TRANLOG, THREADS 2, BEGIN...).
----------------------------------------------------------------------------------------------------------------------------------

*****************************  TARGET   ***********************

GGSCI (rac4.sukku.com) 29> start replicat rep

Sending START request
REPLICAT REP starting

GGSCI (rac4.sukku.com) 30> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                          
REPLICAT    RUNNING     REP         00:00:00      00:00:00   

GGSCI (rac4.sukku.com) 31> info replicat rep

REPLICAT   REP       Last Started 2012-09-20 16:13   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Log Read Checkpoint  File /home/oracle/gg/dirdat/rt000000
                     First Record  RBA 0
########################################################################################

Note: Extract was unable to read the redo logs because they are stored in ASM.

2012-09-20 16:57:33  ERROR   OGG-00446  Oracle GoldenGate Capture for Oracle, ext.prm:  No valid log files for current redo sequence 256, thread 1, error retrieving redo file name for sequence 256, archived = 0, use_alternate = 0Not able to establish initial position for begin time 2012-09-20 16:56:52.
2012-09-20 16:57:33  ERROR   OGG-01668  Oracle GoldenGate Capture for Oracle, ext.prm:  PROCESS ABENDING.

Note: To fix the above error, create a password file on the ASM instance, grant the SYSASM privilege, and add the TRANLOGOPTIONS parameter below so that Extract can read the redo log files stored in ASM.

EXTRACT ext
USERID ggs, PASSWORD ggs
RMTHOST rac4, MGRPORT 7809
RMTTRAIL /home/oracle/gg/dirdat/rt
tranlogoptions asmuser sys@asm1, asmpassword sys
TABLE ggs.t;

GGSCI (rac3.sukku.com) 63> stats extract ext

Sending STATS request to EXTRACT EXT ...

Start of Statistics at 2012-09-20 21:21:27.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                         3.00
        Mapped operations                                  3.00
        Unmapped operations                                0.00
        Other operations                                   0.00
        Excluded operations                                0.00

Output to /home/oracle/gg/dirdat/rt:

Extracting from GGS.A to GGS.A:

*** Total statistics since 2012-09-20 18:45:06 ***
        Total inserts                                      1.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Daily statistics since 2012-09-20 18:45:06 ***
        Total inserts                                      1.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

*** Hourly statistics since 2012-09-20 21:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2012-09-20 18:45:06 ***
        Total inserts                                      1.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

Extracting from GGS.GGS_MARKER to GGS.GGS_MARKER:

*** Total statistics since 2012-09-20 18:45:06 ***

        No database operations have been performed.

*** Daily statistics since 2012-09-20 18:45:06 ***

        No database operations have been performed.

*** Hourly statistics since 2012-09-20 21:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2012-09-20 18:45:06 ***

        No database operations have been performed.

Extracting from GGS.GGS_MARKER to GGS.GGS_MARKER:

*** Total statistics since 2012-09-20 18:45:06 ***

        No database operations have been performed.

*** Daily statistics since 2012-09-20 18:45:06 ***

        No database operations have been performed.

*** Hourly statistics since 2012-09-20 21:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2012-09-20 18:45:06 ***

        No database operations have been performed.

Extracting from GGS.H to GGS.H:

*** Total statistics since 2012-09-20 18:45:06 ***
        Total inserts                                      2.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Daily statistics since 2012-09-20 18:45:06 ***
        Total inserts                                      2.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

*** Hourly statistics since 2012-09-20 21:00:00 ***

        No database operations have been performed.

*** Latest statistics since 2012-09-20 18:45:06 ***
        Total inserts                                      2.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

End of Statistics.

Fix Undo Block Corruption


************************************************************

ORA-00604: error occurred at recursive SQL level 1
ORA-01552: cannot use system rollback segment for non-system tablespace 'USERS'
ORA-06512: at line 19

Errors in file /export/home/oracle/admin/df4/dwnon/bdump/dwnon_smon_26175.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 238 cannot be read at this time
ORA-01110: data file 238: '/u2/df4/oradata/dwnon/undotbs2_dwnon_01.dbf'

************************************************************
Dbv:
The Database Verify utility (dbv) validates the structure of Oracle data files at the operating-system level. It should be run regularly to inspect data files for signs of corruption.
Although it can be used against open data files, the primary purpose of dbv is to verify the integrity of cold data files taken for a backup. When run against online data files, intermittent errors can occur, so the utility should be executed again against the same file to confirm the result. It can be used only against data files, not control files or archived redo logs.

bash-2.05$ dbv file=/u2/df4/oradata/dwnon/undotbs2_dwnon_01.dbf blocksize=16384

DBVERIFY: Release 10.2.0.2.0 - Production on Tue Feb 7 12:56:11 2012

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

DBVERIFY - Verification starting: FILE = /u2/df4/oradata/dwnon/undotbs2_dwnon_01.dbf

DBV-00200: Block, dba 998244473, already marked corrupted
DBVERIFY - Verification complete

Total Pages Examined         : 125440
Total Pages Processed (Data) : 0
Total Pages Failing   (Data) : 0
Total Pages Processed (Index): 0
Total Pages Failing   (Index): 0
Total Pages Processed (Other): 121776
Total Pages Processed (Seg)  : 0
Total Pages Failing   (Seg)  : 0
Total Pages Empty            : 3664
Total Pages Marked Corrupt   : 1
Total Pages Influx           : 0
Highest block SCN            : 2468459942 (17.2468459942)
bash-2.05$


Mount Stage:

Step 1:

SQL> select FILE#, NAME, STATUS, ERROR, RECOVER from v$datafile_header
where status <> 'ONLINE';

 FILE# NAME                                         STATUS  ERROR REC
------ -------------------------------------------- ------- ----- ---
   238 /u2/df4/oradata/dwnon/undotbs2_dwnon_01.dbf  OFFLINE       NO

SQL> alter database recover datafile '/u2/df4/oradata/dwnon/undotbs2_dwnon_01.dbf';

Media Recovery Complete.


SQL> alter database datafile '/u2/df4/oradata/dwnon/undotbs2_dwnon_01.dbf' online;

Database altered.

SQL> alter database open;

Note: The database is now open and schema connections succeed, but DDL and DML operations still fail with the errors below.

SQL> connect sukku/sukku
Connected.

SQL> Create table test (n number);
Error : 

ORA-00604: error occurred at recursive SQL level 1
ORA-01552: cannot use system rollback segment for non-system tablespace 'USERS'
ORA-06512: at line 19

Errors in file /export/home/oracle/admin/df4/dwnon/bdump/dwnon_smon_26175.trc:
ORA-00604: error occurred at recursive SQL level 1
ORA-00376: file 238 cannot be read at this time
ORA-01110: data file 238: '/u2/df4/oradata/dwnon/undotbs2_dwnon_01.dbf'


SQL> select segment_name, status from dba_rollback_segs
     where tablespace_name = 'UNDOTBS2' and status = 'NEEDS RECOVERY';

SEGMENT_NAME                   STATUS
------------------------------ ----------------
_SYSSMU4$                      NEEDS RECOVERY
_SYSSMU5$                      NEEDS RECOVERY
_SYSSMU6$                      NEEDS RECOVERY
_SYSSMU7$                      NEEDS RECOVERY
_SYSSMU8$                      NEEDS RECOVERY
_SYSSMU9$                      NEEDS RECOVERY
_SYSSMU10$                     NEEDS RECOVERY

7 rows selected.

SQL> Shutdown immediate;

Note: If the old undo segments are still online, they must be taken offline first. Once these segments are offline, the old undo tablespace can be dropped without errors.

SQL> alter rollback segment "_SYSSMU4$" offline;
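Since each of the seven segments listed above must be taken offline, the ALTER statements can be generated in one pass. A sketch, assuming the same UNDOTBS2 tablespace:

```sql
select 'alter rollback segment "' || segment_name || '" offline;'
from   dba_rollback_segs
where  tablespace_name = 'UNDOTBS2'
and    status = 'NEEDS RECOVERY';
```

Spool the output and run it as a script, then proceed with the shutdown.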


Step 2:

SQL> Startup nomount;   
SQL> Create pfile='initdwnon.ora' from spfile;
SQL> Shutdown immediate;

Step 3:

Modify parameter file

*.undo_management='MANUAL'
#*.undo_tablespace='UNDOTBS1'
*._offline_rollback_segments=('_SYSSMU4$','_SYSSMU5$','_SYSSMU6$','_SYSSMU7$','_SYSSMU8$','_SYSSMU9$','_SYSSMU10$')


SQL> Startup mount pfile='initdwnon.ora'
SQL> Alter database open ;

SQL> drop rollback segment "_SYSSMU4$";
Rollback segment dropped.

SQL> drop rollback segment "_SYSSMU5$";
Rollback segment dropped.

SQL> drop rollback segment "_SYSSMU6$";
Rollback segment dropped.

SQL> drop rollback segment "_SYSSMU7$";
Rollback segment dropped.

SQL> drop rollback segment "_SYSSMU8$";
Rollback segment dropped.

SQL> drop rollback segment "_SYSSMU9$";
Rollback segment dropped.

SQL> drop rollback segment "_SYSSMU10$";
Rollback segment dropped.



SQL> drop tablespace UNDOTBS2 including contents and datafiles;
Tablespace dropped.

SQL> CREATE UNDO TABLESPACE "UNDOTBS4"
  DATAFILE '/u2/df4/oradata/dwnon/undotbs04.dbf' SIZE 1024M;
Tablespace created.


Step 4 :


Modify parameter file

*.undo_management='AUTO'
*.undo_tablespace='UNDOTBS4'

Remove the hidden parameter:
*._offline_rollback_segments=('_SYSSMU4$','_SYSSMU5$','_SYSSMU6$','_SYSSMU7$','_SYSSMU8$','_SYSSMU9$','_SYSSMU10$')

SQL> Shutdown immediate;
SQL> Startup nomount pfile='initdwnon.ora';
SQL> Create spfile='/export/home/oracle/product/10.2/dbs/spfiledwnon.ora' from pfile='initdwnon.ora';
SQL> Shutdown immediate;
SQL> Startup   ---> now using the spfile
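Once the instance is back up on the spfile, it is worth confirming that automatic undo management is active and the new undo segments are online. A quick check, assuming the UNDOTBS4 name used above:

```sql
show parameter undo

select segment_name, status
from   dba_rollback_segs
where  tablespace_name = 'UNDOTBS4';
```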





Calculate Archive Logs per Day (Hourly)

col day for a10
col thread# format 9999 heading "Thread"
break on thread# skip 2;
set lines 500
set pages 300
set trimspool on

select thread#, to_char(first_time,'YYYY-MM-DD') day,
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'00',1,0)),'999') "00",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'01',1,0)),'999') "01",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'02',1,0)),'999') "02",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'03',1,0)),'999') "03",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'04',1,0)),'999') "04",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'05',1,0)),'999') "05",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'06',1,0)),'999') "06",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'07',1,0)),'999') "07",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'08',1,0)),'999') "08",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'09',1,0)),'999') "09",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'10',1,0)),'999') "10",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'11',1,0)),'999') "11",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'12',1,0)),'999') "12",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'13',1,0)),'999') "13",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'14',1,0)),'999') "14",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'15',1,0)),'999') "15",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'16',1,0)),'999') "16",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'17',1,0)),'999') "17",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'18',1,0)),'999') "18",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'19',1,0)),'999') "19",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'20',1,0)),'999') "20",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'21',1,0)),'999') "21",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'22',1,0)),'999') "22",
to_char(sum(decode(substr(to_char(first_time,'HH24'),1,2),'23',1,0)),'999') "23",
count(*) Total
from v$log_history
where first_time > sysdate - &go_back
group by thread#, to_char(first_time,'YYYY-MM-DD') order by 2 ;

When prompted (&go_back), enter how many days back you want to check the hourly archive log generation.

Here is sample output from the above query:


Thread DAY        00   01   02   03   04   05   06   07   08   09   10   11   12   13   14   15   16   17   18   19   20   21   22   23        TOTAL
------ ---------- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----------
     1 2012-10-07    0    0    0    0    0    0    0    0    0    0    0    0    0    2    2    2    2    2    2    2    2    2    2    2         22


     2 2012-10-07    0    0    0    0    0    0    0    0    0    0    0    0    0    2    2    2    2    2    2    2    2    2    2    2         22


     1 2012-10-08    2    2    2    2    2    2    2    2    2    2    2    2    1    0    0    0    0    0    0    0    0    0    0    0         25


     2 2012-10-08    2    2    2    2    2    2    2    2    2    2    2    2    1    0    0    0    0    0    0    0    0    0    0    0         25
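A companion query gives the daily archive volume rather than the log count. A sketch using v$archived_log (sizes come from blocks * block_size, since the view reports counts in log blocks):

```sql
select trunc(completion_time) day,
       round(sum(blocks * block_size) / 1024 / 1024) mb
from   v$archived_log
group  by trunc(completion_time)
order  by 1;
```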




Installation of Oracle 12c Database




 

Create groups, user and directories


#groupadd -g 54321 oinstall
#groupadd -g 54322 dba
#groupadd -g 54323 oper
#useradd -u 54321 -g oinstall -G dba,oper oracle
#passwd xxxxxx

#mkdir -p /u01/app/oracle/product/12.1.0
#chown -R oracle:oinstall /u01
#chmod -R 775 /u01

Prerequisite Check for 12cR1 Installation

Installation of required RPMs:

  1. Automatic process
On Oracle Linux, installing the oracle-rdbms-server-12cR1-preinstall package performs the prerequisite setup automatically, setting the recommended values in /etc/sysctl.conf and /etc/security/limits.conf. It is probably worth doing a full update first as well:
# yum update
# yum install oracle-rdbms-server-12cR1-preinstall

  2. Manual process
Add the following lines to the "/etc/sysctl.conf" file.

kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.hostname = localhost.localdomain
fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500

Add the following lines to the "/etc/security/limits.conf" file.

oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    2047
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768

Install the following packages if they are not already present.
# From Public Yum or ULN

yum install binutils -y
yum install compat-libcap1 -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y

Installation Of Database Software


$unzip linuxamd64_12c_database_1of2.zip
$unzip linuxamd64_12c_database_2of2.zip

  • Unzipping the above zip files creates a database directory under the software location folder.
  • Make sure sufficient temp space and swap memory are available.
  • Execute the runInstaller file as the oracle user.


After executing runInstaller, it will ask whether to continue with the installation. Proceed with option 'Y':

Continue? (y/n) [n] Y


Step 1: Configure Security updates

  • Deselect the security updates checkbox and click Next.



 Click Yes and proceed to the next step.

 
Step 2: Software Updates
  • Select Skip Software Updates and proceed to the next step.
 

Step 3: Installation Option

  • To create a database along with the installation, select the first option.
  • To install the database software alone, select the second option.
  • To upgrade an existing database, proceed with the third option.


Click Next and proceed further.


Step 4: System Class

  • Select Server Class and proceed to the next step.




Step 5: Grid Installation Options
 
  • Select the option that suits your requirement and proceed further.



Step 6: Install Type

  • Select Typical Install and proceed to next step.





Step 7: Typical Installation

  • Fill in the details and proceed further.



Step 8: Create Inventory


  • Simply Click Next and Proceed
  

 

Step 9: Prerequisite Check

  • Wait until the prerequisite checks complete, then click Next.




  
Step 10: Ignore the SWAP-related warning and proceed further.



Step 11: Summary

  • Before installing the DB software, review all the options; click Edit to change any settings, or click the Install button to proceed with the installation.





Step 12: Install Product
  

 
  • Run the root.sh scripts as the root user.
 

Installation of Database.





  • Clicking Password Management lets you manage the default user accounts.





Step 13: Install Product

  • Click Close to complete the DB installation.



Post Installation Configuration Steps.


Set the values in .bash_profile
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0
export ORACLE_SID=odb12c

export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


Listener Configuration:                $ORACLE_HOME/network/admin/listener.ora
ODB12C =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = odb12c.appsassociates.com)(PORT = 1521))
    )
  )

SID_LIST_ODB12C =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME= /u01/app/oracle/product/12.1.0)
      (SID_NAME = odb12c)
    )
  )

TNS Configuration:  $ORACLE_HOME/network/admin/tnsnames.ora
ODB12C=
        (DESCRIPTION=
                (ADDRESS=(PROTOCOL=tcp)(HOST=odb12c.appsassociates.com)(PORT=1521))
            (CONNECT_DATA=
                (SID=ODB12C)
            )
        )

Working with 12c:



[oracle@odb12c ~]$ sqlplus / as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Sun Jun 30 14:52:50 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select status from v$instance;

STATUS
------------------------------------
OPEN

SQL> select name from v$database;

NAME
---------------------------
ODB12C

  • User creation
SQL> grant connect, resource to TEST identified by TEST;
grant connect, resource to TEST identified by TEST
*
ERROR at line 1:
ORA-65049: creation of local user or role is not allowed in CDB$ROOT

  • Common users in the root container must be created with the C## prefix:
SQL> grant connect, resource to C##TEST identified by TEST container=all;

Grant succeeded.

  • Query to check the DB details:

COLUMN "DB DETAILS" FORMAT A100
SELECT
 'DB_NAME: ' ||sys_context('userenv', 'db_name')||
 ' / CDB?: ' ||(select cdb from v$database)||
 ' / AUTH_ID: ' ||sys_context('userenv', 'authenticated_identity')||
 ' / USER: ' ||sys_context('userenv', 'current_user')||
 ' / CONTAINER: '||nvl(sys_Context('userenv', 'con_Name'), 'NON-CDB')
 "DB DETAILS"
 FROM DUAL
 /

DB DETAILS
-------------------------------------------------------------------------------------------------
DB_NAME: odb12c / CDB?: YES / AUTH_ID: oracle / USER: SYS / CONTAINER: CDB$ROOT



  • Query to check the PDB details from v$pdbs:

SQL> select NAME, CON_ID, DBID, OPEN_TIME, OPEN_MODE from v$pdbs;

NAME                     CON_ID       DBID OPEN_TIME                       OPEN_MODE
-------------------- ---------- ---------- ------------------------------- -------------------
PDB$SEED                      2 4062010165 30-JUN-13 01.05.45.950 AM       READ ONLY
PDB12C                        3  471577855 30-JUN-13 01.07.45.831 AM       READ WRITE


  • Query to check the datafile details from cdb_data_files:

col TABLESPACE_NAME for a20
col FILE_NAME for a80
select      con_id,
      tablespace_name,
      file_Name
from  cdb_data_files
order by 1, 2;


    CON_ID TABLESPACE_NAME      FILE_NAME
---------- -------------------- -----------------------------------------------------------
         1 SYSAUX               /u01/app/oracle/oradata/odb12c/sysaux01.dbf
         1 SYSTEM               /u01/app/oracle/oradata/odb12c/system01.dbf
         1 UNDOTBS1             /u01/app/oracle/oradata/odb12c/undotbs01.dbf
         1 USERS                /u01/app/oracle/oradata/odb12c/users01.dbf
         2 SYSAUX               /u01/app/oracle/oradata/odb12c/pdbseed/sysaux01.dbf
         2 SYSTEM               /u01/app/oracle/oradata/odb12c/pdbseed/system01.dbf
         3 EXAMPLE              /u01/app/oracle/oradata/odb12c/pdb12c/example01.dbf
         3 SYSAUX               /u01/app/oracle/oradata/odb12c/pdb12c/sysaux01.dbf
         3 SYSTEM               /u01/app/oracle/oradata/odb12c/pdb12c/system01.dbf
         3 USERS                /u01/app/oracle/oradata/odb12c/pdb12c/SAMPLE_SCHEMA_users01.dbf

10 rows selected.

CON_ID 1: Root container (CDB$ROOT)
CON_ID 2: Seed container (PDB$SEED)
CON_ID 3: Pluggable database (PDB12C)

  • Here are some queries using the new data dictionary views:

SQL> SELECT pdb FROM dba_services;
SQL> SELECT sys_context('userenv','con_name') "MY_CONTAINER" FROM dual;
SQL> SHOW con_name
SQL> SELECT NAME, CON_ID FROM v$active_services ORDER BY 1;
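Pluggable databases are also opened and closed individually from the root container. For example, connected to CDB$ROOT as SYSDBA, assuming the PDB12C database above:

```sql
alter pluggable database PDB12C close immediate;
alter pluggable database PDB12C open;
select name, open_mode from v$pdbs;
```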

SQL Behaviour with the SQL_ID


You can review how a SQL statement has behaved over time using its SQL_ID against the AWR history views.


set linesize 200
set pagesize 200

col BEGIN_INTERVAL_TIME format A28
col END_INTERVAL_TIME format A28
col SNAP_ID format 999999 heading "Snap"
col PLAN_HASH_VALUE format 99999999999 heading "Plan|Hash"
col TOTAL_CNT noprint
col EXECUTIONS_DELTA format 999,999,999 heading "Executions"
col ELAPSED_TIME_DELTA format 999,999 heading "Total|Elapsed|(secs)"
col BUFFER_GETS_DELTA format 999,999,999 heading "Buffer Gets"
col DISK_READS_DELTA format 999,999,999 heading "Disk|Reads"
col CPU_TIME_DELTA format 999,999 heading "Total CPU|(secs)"
col IOWAIT_DELTA format 999,999,999,999 heading "IO|Delta"
col ELAPSED_PER_EXECUTION format 9,999.9 heading "Elapsed|/Exec|(secs)"
col GETS_PER_EXECUTION format 999,999,999 heading "Buffer|Gets|/Exec"
col CPU_PER_EXECUTION format 999.99 heading "CPU Time|/Exec|(secs)"
col DISKREADS_PER_EXECUTION format 999,999,999 heading "Disk Reads|/Exec"
col ROWS_PROCESSED_DELTA format 999,999,999 heading "Rows|Processed"
col ROWS_PROCESSED_PER_EXECUTION format 999,999 heading "Rows|/Exec"
col IO_PER_EXECUTION format 999,999,999 heading "IO|/Exec"
col REPORT_TIME format A15
col SNAP_ID_FOUND format A12 heading "First/Last|snap|found in"
col SPACER format A5 heading ''
col INSTANCE_NUMBER format 9 heading "I" print
SELECT B.INSTANCE_NUMBER, B.BEGIN_INTERVAL_TIME, B.END_INTERVAL_TIME, PLAN_HASH_VALUE, A.EXECUTIONS_DELTA, A.DISK_READS_DELTA, A.BUFFER_GETS_DELTA,
round(A.ROWS_PROCESSED_DELTA / DECODE(A.EXECUTIONS_DELTA,0,1, A.EXECUTIONS_DELTA)) ROWS_PROCESSED_PER_EXECUTION,
round(A.DISK_READS_DELTA / DECODE(A.EXECUTIONS_DELTA,0,1, A.EXECUTIONS_DELTA)) DISKREADS_PER_EXECUTION,
round(A.BUFFER_GETS_DELTA / DECODE(A.EXECUTIONS_DELTA,0,1, A.EXECUTIONS_DELTA)) GETS_PER_EXECUTION,
round(( A.ELAPSED_TIME_DELTA / 1000 / 1000) / DECODE(A.EXECUTIONS_DELTA,0,1, A.EXECUTIONS_DELTA),1) ELAPSED_PER_EXECUTION
from DBA_HIST_SQLSTAT A, DBA_HIST_SNAPSHOT B
where A.SNAP_ID = B.SNAP_ID
and A.INSTANCE_NUMBER = B.INSTANCE_NUMBER
and A.SQL_ID = '&SQL_ID'
and B.BEGIN_INTERVAL_TIME >= sysdate - &GO_BACK
order by B.BEGIN_INTERVAL_TIME, B.INSTANCE_NUMBER;


Pass the SQL_ID and GO_BACK values (how many days back you want to check the SQL behaviour).

Here is an example to pass the values.

Enter value for sql_id: 1fkh93md0802n
Enter value for go_back: 3





DB Clone using RMAN duplicate database command




Set the Mandatory Parameters

Source DB   -- db_name  = VIS
Clone DB    -- db_name  = VIS1

Source DB   -- control_files = /u02/VIS/visdata/cntrl01.dbf,/u02/VIS/visdata/cntrl02.dbf,/u02/VIS/visdata/cntrl03.dbf
Clone DB    -- control_files = /VIS1/visdata/cntrl01.dbf,/VIS1/visdata/cntrl02.dbf,/VIS1/visdata/cntrl03.dbf

Source DB   -- diagnostic_dest  = /u02/VIS/visdb/11.2.0.3/admin/VIS_ebs11i
Clone DB    -- diagnostic_dest  = /VIS1/visdb/11.2.0.3/admin/VIS1_ebs11i

Source DB   -- core_dump_dest   = /u02/VIS/visdb/11.2.0.3/admin/VIS_ebs11i/cdump
Clone DB    -- core_dump_dest   = /VIS1/visdb/11.2.0.3/admin/VIS1_ebs11i/cdump

#Add New entries to Clone DB pfile.
Clone DB    -- db_file_name_convert  = '/u02/VIS/visdata/','/VIS1/visdata/'
Clone DB    -- log_file_name_convert = '/u02/VIS/visdata/','/VIS1/visdata/'
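Putting the clone-side values above together, the clone's initVIS1.ora would look roughly like this (a minimal sketch; copy the remaining parameters from the source pfile and adjust paths as needed):

```
db_name = VIS1
control_files = /VIS1/visdata/cntrl01.dbf,/VIS1/visdata/cntrl02.dbf,/VIS1/visdata/cntrl03.dbf
diagnostic_dest = /VIS1/visdb/11.2.0.3/admin/VIS1_ebs11i
core_dump_dest = /VIS1/visdb/11.2.0.3/admin/VIS1_ebs11i/cdump
db_file_name_convert = '/u02/VIS/visdata/','/VIS1/visdata/'
log_file_name_convert = '/u02/VIS/visdata/','/VIS1/visdata/'
```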


Set the ORACLE_SID and ORACLE_HOME parameters

[oracle@ebs11i ~]$ export ORACLE_SID=VIS1
[oracle@ebs11i ~]$ export ORACLE_HOME=/VIS1/visdb/11.2.0.3


Connect to Instance

[oracle@ebs11i ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Wed Dec 4 05:13:20 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to an idle instance.

Bring up the Instance to No Mount with Pfile

SQL> startup nomount pfile='/VIS1/visdb/11.2.0.3/dbs/initVIS1.ora';

Total System Global Area  640323584 bytes
Fixed Size                  1346728 bytes
Variable Size             461374296 bytes
Database Buffers          163577856 bytes
Redo Buffers               14024704 bytes
SQL>
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


Connect to RMAN Auxiliary instance
[oracle@ebs11i ~]$ rman auxiliary /

Recovery Manager: Release 11.2.0.3.0 - Production on Wed Dec 4 05:14:41 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

connected to auxiliary database: VIS1 (not mounted)


Issue Duplicate Database command to Clone the Database

RMAN> duplicate database to VIS1 backup location '/u03/Backup_new/' NOFILENAMECHECK;

Starting Duplicate Db at 04-DEC-13

contents of Memory Script:
{
   sql clone "create spfile from memory";
}
executing Memory Script

sql statement: create spfile from memory

contents of Memory Script:
{
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     640323584 bytes

Fixed Size                     1346728 bytes
Variable Size                461374296 bytes
Database Buffers             163577856 bytes
Redo Buffers                  14024704 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''VIS'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   sql clone "alter system set  db_unique_name =
 ''VIS1'' comment=
 ''Modified by RMAN duplicate'' scope=spfile";
   shutdown clone immediate;
   startup clone force nomount
   restore clone primary controlfile from  '/u03/Backup_new/06oqiltp_1_1.bkp';
   alter clone database mount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''VIS'' comment= ''Modified by RMAN duplicate'' scope=spfile

sql statement: alter system set  db_unique_name =  ''VIS1'' comment= ''Modified by RMAN duplicate'' scope=spfile

Oracle instance shut down

Oracle instance started

Total System Global Area     640323584 bytes

Fixed Size                     1346728 bytes
Variable Size                461374296 bytes
Database Buffers             163577856 bytes
Redo Buffers                  14024704 bytes

Starting restore at 04-DEC-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=301 device type=DISK

channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
output file name=/VIS1/visdata/cntrl01.dbf
output file name=/VIS1/visdata/cntrl02.dbf
output file name=/VIS1/visdata/cntrl03.dbf
Finished restore at 04-DEC-13

database mounted
released channel: ORA_AUX_DISK_1
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=301 device type=DISK

contents of Memory Script:
{
   set until scn  8173891933524;
   set newname for datafile  1 to
 "/VIS1/visdata/sys1.dbf";
   set newname for datafile  2 to
 "/VIS1/visdata/sys2.dbf";
   set newname for datafile  3 to
 "/VIS1/visdata/sys3.dbf";
   set newname for datafile  4 to
 "/VIS1/visdata/sys4.dbf";
   set newname for datafile  5 to
 "/VIS1/visdata/sys5.dbf";
   set newname for datafile  6 to
 "/VIS1/visdata/sys6.dbf";
   set newname for datafile  7 to
 "/VIS1/visdata/sys7.dbf";
   set newname for datafile  8 to
 "/VIS1/visdata/undo01.dbf";
   set newname for datafile  9 to
 "/VIS1/visdata/undo02.dbf";
   set newname for datafile  10 to
 "/VIS1/visdata/undo03.dbf";
   set newname for datafile  11 to
 "/VIS1/visdata/undo04.dbf";
   set newname for datafile  12 to
 "/VIS1/visdata/archive1.dbf";
   set newname for datafile  13 to
 "/VIS1/visdata/archive2.dbf";
   set newname for datafile  14 to
 "/VIS1/visdata/media1.dbf";
   set newname for datafile  15 to
 "/VIS1/visdata/media2.dbf";
   set newname for datafile  16 to
 "/VIS1/visdata/media3.dbf";
   set newname for datafile  17 to
 "/VIS1/visdata/nologging1.dbf";
   set newname for datafile  18 to
 "/VIS1/visdata/queues1.dbf";
   set newname for datafile  19 to
 "/VIS1/visdata/queues2.dbf";
   set newname for datafile  20 to
 "/VIS1/visdata/reference1.dbf";
   set newname for datafile  21 to
 "/VIS1/visdata/reference2.dbf";
   set newname for datafile  22 to
 "/VIS1/visdata/summary1.dbf";
   set newname for datafile  23 to
 "/VIS1/visdata/summary2.dbf";
   set newname for datafile  24 to
 "/VIS1/visdata/summary3.dbf";
   set newname for datafile  25 to
 "/VIS1/visdata/summary4.dbf";
   set newname for datafile  26 to
 "/VIS1/visdata/summary5.dbf";
   set newname for datafile  27 to
 "/VIS1/visdata/tx_data1.dbf";
   set newname for datafile  28 to
 "/VIS1/visdata/tx_data2.dbf";
   set newname for datafile  29 to
 "/VIS1/visdata/tx_data3.dbf";
   set newname for datafile  30 to
 "/VIS1/visdata/tx_data4.dbf";
   set newname for datafile  31 to
 "/VIS1/visdata/tx_data5.dbf";
   set newname for datafile  32 to
 "/VIS1/visdata/tx_data6.dbf";
   set newname for datafile  33 to
 "/VIS1/visdata/tx_data7.dbf";
   set newname for datafile  34 to
 "/VIS1/visdata/tx_data8.dbf";
   set newname for datafile  35 to
 "/VIS1/visdata/tx_data9.dbf";
   set newname for datafile  36 to
 "/VIS1/visdata/tx_data10.dbf";
   set newname for datafile  37 to
 "/VIS1/visdata/tx_data11.dbf";
   set newname for datafile  38 to
 "/VIS1/visdata/tx_idx1.dbf";
   set newname for datafile  39 to
 "/VIS1/visdata/tx_idx2.dbf";
   set newname for datafile  40 to
 "/VIS1/visdata/tx_idx3.dbf";
   set newname for datafile  41 to
 "/VIS1/visdata/tx_idx4.dbf";
   set newname for datafile  42 to
 "/VIS1/visdata/tx_idx5.dbf";
   set newname for datafile  43 to
 "/VIS1/visdata/tx_idx6.dbf";
   set newname for datafile  44 to
 "/VIS1/visdata/tx_idx7.dbf";
   set newname for datafile  45 to
 "/VIS1/visdata/tx_idx8.dbf";
   set newname for datafile  46 to
 "/VIS1/visdata/tx_idx9.dbf";
   set newname for datafile  47 to
 "/VIS1/visdata/tx_idx10.dbf";
   set newname for datafile  48 to
 "/VIS1/visdata/tx_idx11.dbf";
   set newname for datafile  49 to
 "/VIS1/visdata/apps_ts_tx_interface.dbf";
   set newname for datafile  50 to
 "/VIS1/visdata/ctx1.dbf";
   set newname for datafile  51 to
 "/VIS1/visdata/sysaux01.dbf";
   set newname for datafile  52 to
 "/VIS1/visdata/aadev.dbf";
   set newname for datafile  53 to
 "/VIS1/visdata/odm.dbf";
   set newname for datafile  55 to
 "/VIS1/visdata/olap.dbf";
   set newname for datafile  56 to
 "/VIS1/visdata/owa1.dbf";
   set newname for datafile  57 to
 "/VIS1/visdata/portal.dbf";
   set newname for datafile  58 to
 "/VIS1/visdata/mobile01.dbf";
   restore
   clone database
   ;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 04-DEC-13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /VIS1/visdata/sys1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00002 to /VIS1/visdata/sys2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00005 to /VIS1/visdata/sys5.dbf
channel ORA_AUX_DISK_1: restoring datafile 00007 to /VIS1/visdata/sys7.dbf
channel ORA_AUX_DISK_1: restoring datafile 00010 to /VIS1/visdata/undo03.dbf
channel ORA_AUX_DISK_1: restoring datafile 00014 to /VIS1/visdata/media1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00018 to /VIS1/visdata/queues1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00023 to /VIS1/visdata/summary2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00027 to /VIS1/visdata/tx_data1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00033 to /VIS1/visdata/tx_data7.dbf
channel ORA_AUX_DISK_1: restoring datafile 00036 to /VIS1/visdata/tx_data10.dbf
channel ORA_AUX_DISK_1: restoring datafile 00038 to /VIS1/visdata/tx_idx1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00039 to /VIS1/visdata/tx_idx2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00042 to /VIS1/visdata/tx_idx5.dbf
channel ORA_AUX_DISK_1: restoring datafile 00044 to /VIS1/visdata/tx_idx7.dbf
channel ORA_AUX_DISK_1: restoring datafile 00045 to /VIS1/visdata/tx_idx8.dbf
channel ORA_AUX_DISK_1: restoring datafile 00051 to /VIS1/visdata/sysaux01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00056 to /VIS1/visdata/owa1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00057 to /VIS1/visdata/portal.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u03/Backup_new/06oqiltp_1_1.bkp

channel ORA_AUX_DISK_1: piece handle=/u03/Backup_new/06oqiltp_1_1.bkp tag=TAG20131203T072024
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 01:10:19
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00003 to /VIS1/visdata/sys3.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /VIS1/visdata/sys6.dbf
channel ORA_AUX_DISK_1: restoring datafile 00008 to /VIS1/visdata/undo01.dbf
channel ORA_AUX_DISK_1: restoring datafile 00011 to /VIS1/visdata/undo04.dbf
channel ORA_AUX_DISK_1: restoring datafile 00015 to /VIS1/visdata/media2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00017 to /VIS1/visdata/nologging1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00019 to /VIS1/visdata/queues2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00020 to /VIS1/visdata/reference1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00026 to /VIS1/visdata/summary5.dbf
channel ORA_AUX_DISK_1: restoring datafile 00028 to /VIS1/visdata/tx_data2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00030 to /VIS1/visdata/tx_data4.dbf
channel ORA_AUX_DISK_1: restoring datafile 00032 to /VIS1/visdata/tx_data6.dbf
channel ORA_AUX_DISK_1: restoring datafile 00034 to /VIS1/visdata/tx_data8.dbf
channel ORA_AUX_DISK_1: restoring datafile 00037 to /VIS1/visdata/tx_data11.dbf
channel ORA_AUX_DISK_1: restoring datafile 00040 to /VIS1/visdata/tx_idx3.dbf
channel ORA_AUX_DISK_1: restoring datafile 00043 to /VIS1/visdata/tx_idx6.dbf
channel ORA_AUX_DISK_1: restoring datafile 00047 to /VIS1/visdata/tx_idx10.dbf
channel ORA_AUX_DISK_1: restoring datafile 00052 to /VIS1/visdata/aadev.dbf
channel ORA_AUX_DISK_1: restoring datafile 00055 to /VIS1/visdata/olap.dbf
channel ORA_AUX_DISK_1: restoring datafile 00058 to /VIS1/visdata/mobile01.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u03/Backup_new/08oqiltp_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/u03/Backup_new/08oqiltp_1_1.bkp tag=TAG20131203T072024
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 01:18:37
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00004 to /VIS1/visdata/sys4.dbf
channel ORA_AUX_DISK_1: restoring datafile 00009 to /VIS1/visdata/undo02.dbf
channel ORA_AUX_DISK_1: restoring datafile 00012 to /VIS1/visdata/archive1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00013 to /VIS1/visdata/archive2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00016 to /VIS1/visdata/media3.dbf
channel ORA_AUX_DISK_1: restoring datafile 00021 to /VIS1/visdata/reference2.dbf
channel ORA_AUX_DISK_1: restoring datafile 00022 to /VIS1/visdata/summary1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00024 to /VIS1/visdata/summary3.dbf
channel ORA_AUX_DISK_1: restoring datafile 00025 to /VIS1/visdata/summary4.dbf
channel ORA_AUX_DISK_1: restoring datafile 00029 to /VIS1/visdata/tx_data3.dbf
channel ORA_AUX_DISK_1: restoring datafile 00031 to /VIS1/visdata/tx_data5.dbf
channel ORA_AUX_DISK_1: restoring datafile 00035 to /VIS1/visdata/tx_data9.dbf
channel ORA_AUX_DISK_1: restoring datafile 00041 to /VIS1/visdata/tx_idx4.dbf
channel ORA_AUX_DISK_1: restoring datafile 00046 to /VIS1/visdata/tx_idx9.dbf
channel ORA_AUX_DISK_1: restoring datafile 00048 to /VIS1/visdata/tx_idx11.dbf
channel ORA_AUX_DISK_1: restoring datafile 00049 to /VIS1/visdata/apps_ts_tx_interface.dbf
channel ORA_AUX_DISK_1: restoring datafile 00050 to /VIS1/visdata/ctx1.dbf
channel ORA_AUX_DISK_1: restoring datafile 00053 to /VIS1/visdata/odm.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u03/Backup_new/07oqiltp_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/u03/Backup_new/07oqiltp_1_1.bkp tag=TAG20131203T072024
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 01:28:19
Finished restore at 04-DEC-13

contents of Memory Script:
{
   switch clone datafile all;
}
executing Memory Script

datafile 1 switched to datafile copy
input datafile copy RECID=58 STAMP=833274832 file name=/VIS1/visdata/sys1.dbf
datafile 2 switched to datafile copy
input datafile copy RECID=59 STAMP=833274832 file name=/VIS1/visdata/sys2.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=60 STAMP=833274832 file name=/VIS1/visdata/sys3.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=61 STAMP=833274833 file name=/VIS1/visdata/sys4.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=62 STAMP=833274833 file name=/VIS1/visdata/sys5.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=63 STAMP=833274833 file name=/VIS1/visdata/sys6.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=64 STAMP=833274833 file name=/VIS1/visdata/sys7.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=65 STAMP=833274833 file name=/VIS1/visdata/undo01.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=66 STAMP=833274833 file name=/VIS1/visdata/undo02.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=67 STAMP=833274833 file name=/VIS1/visdata/undo03.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=68 STAMP=833274833 file name=/VIS1/visdata/undo04.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=69 STAMP=833274833 file name=/VIS1/visdata/archive1.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=70 STAMP=833274833 file name=/VIS1/visdata/archive2.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=71 STAMP=833274833 file name=/VIS1/visdata/media1.dbf
datafile 15 switched to datafile copy
input datafile copy RECID=72 STAMP=833274833 file name=/VIS1/visdata/media2.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=73 STAMP=833274833 file name=/VIS1/visdata/media3.dbf
datafile 17 switched to datafile copy
input datafile copy RECID=74 STAMP=833274833 file name=/VIS1/visdata/nologging1.dbf
datafile 18 switched to datafile copy
input datafile copy RECID=75 STAMP=833274833 file name=/VIS1/visdata/queues1.dbf
datafile 19 switched to datafile copy
input datafile copy RECID=76 STAMP=833274833 file name=/VIS1/visdata/queues2.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=77 STAMP=833274833 file name=/VIS1/visdata/reference1.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=78 STAMP=833274833 file name=/VIS1/visdata/reference2.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=79 STAMP=833274833 file name=/VIS1/visdata/summary1.dbf
datafile 23 switched to datafile copy
input datafile copy RECID=80 STAMP=833274833 file name=/VIS1/visdata/summary2.dbf
datafile 24 switched to datafile copy
input datafile copy RECID=81 STAMP=833274833 file name=/VIS1/visdata/summary3.dbf
datafile 25 switched to datafile copy
input datafile copy RECID=82 STAMP=833274834 file name=/VIS1/visdata/summary4.dbf
datafile 26 switched to datafile copy
input datafile copy RECID=83 STAMP=833274834 file name=/VIS1/visdata/summary5.dbf
datafile 27 switched to datafile copy
input datafile copy RECID=84 STAMP=833274834 file name=/VIS1/visdata/tx_data1.dbf
datafile 28 switched to datafile copy
input datafile copy RECID=85 STAMP=833274834 file name=/VIS1/visdata/tx_data2.dbf
datafile 29 switched to datafile copy
input datafile copy RECID=86 STAMP=833274834 file name=/VIS1/visdata/tx_data3.dbf
datafile 30 switched to datafile copy
input datafile copy RECID=87 STAMP=833274834 file name=/VIS1/visdata/tx_data4.dbf
datafile 31 switched to datafile copy
input datafile copy RECID=88 STAMP=833274834 file name=/VIS1/visdata/tx_data5.dbf
datafile 32 switched to datafile copy
input datafile copy RECID=89 STAMP=833274834 file name=/VIS1/visdata/tx_data6.dbf
datafile 33 switched to datafile copy
input datafile copy RECID=90 STAMP=833274834 file name=/VIS1/visdata/tx_data7.dbf
datafile 34 switched to datafile copy
input datafile copy RECID=91 STAMP=833274834 file name=/VIS1/visdata/tx_data8.dbf
datafile 35 switched to datafile copy
input datafile copy RECID=92 STAMP=833274834 file name=/VIS1/visdata/tx_data9.dbf
datafile 36 switched to datafile copy
input datafile copy RECID=93 STAMP=833274834 file name=/VIS1/visdata/tx_data10.dbf
datafile 37 switched to datafile copy
input datafile copy RECID=94 STAMP=833274834 file name=/VIS1/visdata/tx_data11.dbf
datafile 38 switched to datafile copy
input datafile copy RECID=95 STAMP=833274834 file name=/VIS1/visdata/tx_idx1.dbf
datafile 39 switched to datafile copy
input datafile copy RECID=96 STAMP=833274834 file name=/VIS1/visdata/tx_idx2.dbf
datafile 40 switched to datafile copy
input datafile copy RECID=97 STAMP=833274834 file name=/VIS1/visdata/tx_idx3.dbf
datafile 41 switched to datafile copy
input datafile copy RECID=98 STAMP=833274834 file name=/VIS1/visdata/tx_idx4.dbf
datafile 42 switched to datafile copy
input datafile copy RECID=99 STAMP=833274834 file name=/VIS1/visdata/tx_idx5.dbf
datafile 43 switched to datafile copy
input datafile copy RECID=100 STAMP=833274834 file name=/VIS1/visdata/tx_idx6.dbf
datafile 44 switched to datafile copy
input datafile copy RECID=101 STAMP=833274834 file name=/VIS1/visdata/tx_idx7.dbf
datafile 45 switched to datafile copy
input datafile copy RECID=102 STAMP=833274834 file name=/VIS1/visdata/tx_idx8.dbf
datafile 46 switched to datafile copy
input datafile copy RECID=103 STAMP=833274835 file name=/VIS1/visdata/tx_idx9.dbf
datafile 47 switched to datafile copy
input datafile copy RECID=104 STAMP=833274835 file name=/VIS1/visdata/tx_idx10.dbf
datafile 48 switched to datafile copy
input datafile copy RECID=105 STAMP=833274835 file name=/VIS1/visdata/tx_idx11.dbf
datafile 49 switched to datafile copy
input datafile copy RECID=106 STAMP=833274835 file name=/VIS1/visdata/apps_ts_tx_interface.dbf
datafile 50 switched to datafile copy
input datafile copy RECID=107 STAMP=833274835 file name=/VIS1/visdata/ctx1.dbf
datafile 51 switched to datafile copy
input datafile copy RECID=108 STAMP=833274835 file name=/VIS1/visdata/sysaux01.dbf
datafile 52 switched to datafile copy
input datafile copy RECID=109 STAMP=833274835 file name=/VIS1/visdata/aadev.dbf
datafile 53 switched to datafile copy
input datafile copy RECID=110 STAMP=833274835 file name=/VIS1/visdata/odm.dbf
datafile 55 switched to datafile copy
input datafile copy RECID=111 STAMP=833274835 file name=/VIS1/visdata/olap.dbf
datafile 56 switched to datafile copy
input datafile copy RECID=112 STAMP=833274835 file name=/VIS1/visdata/owa1.dbf
datafile 57 switched to datafile copy
input datafile copy RECID=113 STAMP=833274835 file name=/VIS1/visdata/portal.dbf
datafile 58 switched to datafile copy
input datafile copy RECID=114 STAMP=833274835 file name=/VIS1/visdata/mobile01.dbf

contents of Memory Script:
{
   set until scn  8173891933524;
   recover
   clone database
    delete archivelog
   ;
}
executing Memory Script

executing command: SET until clause

Starting recover at 04-DEC-13
using channel ORA_AUX_DISK_1

starting media recovery

channel ORA_AUX_DISK_1: starting archived log restore to default destination
channel ORA_AUX_DISK_1: restoring archived log
archived log thread=1 sequence=949
channel ORA_AUX_DISK_1: reading from backup piece /u03/Backup_new/09oqir1k_1_1.bkp
channel ORA_AUX_DISK_1: piece handle=/u03/Backup_new/09oqir1k_1_1.bkp tag=TAG20131203T084748
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
archived log file name=/VIS1/visdb/11.2.0.3/dbs/arch1_949_831836307.dbf thread=1 sequence=949
channel clone_default: deleting archived log(s)
archived log file name=/VIS1/visdb/11.2.0.3/dbs/arch1_949_831836307.dbf RECID=1 STAMP=833274841
media recovery complete, elapsed time: 00:00:06
Finished recover at 04-DEC-13
Oracle instance started

Total System Global Area     640323584 bytes

Fixed Size                     1346728 bytes
Variable Size                461374296 bytes
Database Buffers             163577856 bytes
Redo Buffers                  14024704 bytes

contents of Memory Script:
{
   sql clone "alter system set  db_name =
 ''VIS1'' comment=
 ''Reset to original value by RMAN'' scope=spfile";
   sql clone "alter system reset  db_unique_name scope=spfile";
   shutdown clone immediate;
   startup clone nomount;
}
executing Memory Script

sql statement: alter system set  db_name =  ''VIS1'' comment= ''Reset to original value by RMAN'' scope=spfile

sql statement: alter system reset  db_unique_name scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area     640323584 bytes

Fixed Size                     1346728 bytes
Variable Size                461374296 bytes
Database Buffers             163577856 bytes
Redo Buffers                  14024704 bytes
sql statement: CREATE CONTROLFILE REUSE SET DATABASE "VIS1" RESETLOGS ARCHIVELOG
  MAXLOGFILES     32
  MAXLOGMEMBERS      5
  MAXDATAFILES      512
  MAXINSTANCES     8
  MAXLOGHISTORY     3630
 LOGFILE
  GROUP   1 ( '/VIS1/visdata/log3.dbf' ) SIZE 50 M  REUSE,
  GROUP   2 ( '/VIS1/visdata/log2.dbf' ) SIZE 50 M  REUSE,
  GROUP   3 ( '/VIS1/visdata/log1.dbf' ) SIZE 50 M  REUSE
 DATAFILE
  '/VIS1/visdata/sys1.dbf'
 CHARACTER SET UTF8


contents of Memory Script:
{
   set newname for tempfile  1 to
 "/VIS1/visdata/tmp1.dbf";
   switch clone tempfile all;
   catalog clone datafilecopy  "/VIS1/visdata/sys2.dbf",
 "/VIS1/visdata/sys3.dbf",
 "/VIS1/visdata/sys4.dbf",
 "/VIS1/visdata/sys5.dbf",
 "/VIS1/visdata/sys6.dbf",
 "/VIS1/visdata/sys7.dbf",
 "/VIS1/visdata/undo01.dbf",
 "/VIS1/visdata/undo02.dbf",
 "/VIS1/visdata/undo03.dbf",
 "/VIS1/visdata/undo04.dbf",
 "/VIS1/visdata/archive1.dbf",
 "/VIS1/visdata/archive2.dbf",
 "/VIS1/visdata/media1.dbf",
 "/VIS1/visdata/media2.dbf",
 "/VIS1/visdata/media3.dbf",
 "/VIS1/visdata/nologging1.dbf",
 "/VIS1/visdata/queues1.dbf",
 "/VIS1/visdata/queues2.dbf",
 "/VIS1/visdata/reference1.dbf",
 "/VIS1/visdata/reference2.dbf",
 "/VIS1/visdata/summary1.dbf",
 "/VIS1/visdata/summary2.dbf",
 "/VIS1/visdata/summary3.dbf",
 "/VIS1/visdata/summary4.dbf",
 "/VIS1/visdata/summary5.dbf",
 "/VIS1/visdata/tx_data1.dbf",
 "/VIS1/visdata/tx_data2.dbf",
 "/VIS1/visdata/tx_data3.dbf",
 "/VIS1/visdata/tx_data4.dbf",
 "/VIS1/visdata/tx_data5.dbf",
 "/VIS1/visdata/tx_data6.dbf",
 "/VIS1/visdata/tx_data7.dbf",
 "/VIS1/visdata/tx_data8.dbf",
 "/VIS1/visdata/tx_data9.dbf",
 "/VIS1/visdata/tx_data10.dbf",
 "/VIS1/visdata/tx_data11.dbf",
 "/VIS1/visdata/tx_idx1.dbf",
 "/VIS1/visdata/tx_idx2.dbf",
 "/VIS1/visdata/tx_idx3.dbf",
 "/VIS1/visdata/tx_idx4.dbf",
 "/VIS1/visdata/tx_idx5.dbf",
 "/VIS1/visdata/tx_idx6.dbf",
 "/VIS1/visdata/tx_idx7.dbf",
 "/VIS1/visdata/tx_idx8.dbf",
 "/VIS1/visdata/tx_idx9.dbf",
 "/VIS1/visdata/tx_idx10.dbf",
 "/VIS1/visdata/tx_idx11.dbf",
 "/VIS1/visdata/apps_ts_tx_interface.dbf",
 "/VIS1/visdata/ctx1.dbf",
 "/VIS1/visdata/sysaux01.dbf",
 "/VIS1/visdata/aadev.dbf",
 "/VIS1/visdata/odm.dbf",
 "/VIS1/visdata/olap.dbf",
 "/VIS1/visdata/owa1.dbf",
 "/VIS1/visdata/portal.dbf",
 "/VIS1/visdata/mobile01.dbf";
   switch clone datafile all;
}
executing Memory Script

executing command: SET NEWNAME

renamed tempfile 1 to /VIS1/visdata/tmp1.dbf in control file

cataloged datafile copy
datafile copy file name=/VIS1/visdata/sys2.dbf RECID=1 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/sys3.dbf RECID=2 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/sys4.dbf RECID=3 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/sys5.dbf RECID=4 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/sys6.dbf RECID=5 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/sys7.dbf RECID=6 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/undo01.dbf RECID=7 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/undo02.dbf RECID=8 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/undo03.dbf RECID=9 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/undo04.dbf RECID=10 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/archive1.dbf RECID=11 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/archive2.dbf RECID=12 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/media1.dbf RECID=13 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/media2.dbf RECID=14 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/media3.dbf RECID=15 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/nologging1.dbf RECID=16 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/queues1.dbf RECID=17 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/queues2.dbf RECID=18 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/reference1.dbf RECID=19 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/reference2.dbf RECID=20 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/summary1.dbf RECID=21 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/summary2.dbf RECID=22 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/summary3.dbf RECID=23 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/summary4.dbf RECID=24 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/summary5.dbf RECID=25 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data1.dbf RECID=26 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data2.dbf RECID=27 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data3.dbf RECID=28 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data4.dbf RECID=29 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data5.dbf RECID=30 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data6.dbf RECID=31 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data7.dbf RECID=32 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data8.dbf RECID=33 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data9.dbf RECID=34 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data10.dbf RECID=35 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_data11.dbf RECID=36 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx1.dbf RECID=37 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx2.dbf RECID=38 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx3.dbf RECID=39 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx4.dbf RECID=40 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx5.dbf RECID=41 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx6.dbf RECID=42 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx7.dbf RECID=43 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx8.dbf RECID=44 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx9.dbf RECID=45 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx10.dbf RECID=46 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/tx_idx11.dbf RECID=47 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/apps_ts_tx_interface.dbf RECID=48 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/ctx1.dbf RECID=49 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/sysaux01.dbf RECID=50 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/aadev.dbf RECID=51 STAMP=833274869
cataloged datafile copy
datafile copy file name=/VIS1/visdata/odm.dbf RECID=52 STAMP=833274870
cataloged datafile copy
datafile copy file name=/VIS1/visdata/olap.dbf RECID=53 STAMP=833274870
cataloged datafile copy
datafile copy file name=/VIS1/visdata/owa1.dbf RECID=54 STAMP=833274870
cataloged datafile copy
datafile copy file name=/VIS1/visdata/portal.dbf RECID=55 STAMP=833274870
cataloged datafile copy
datafile copy file name=/VIS1/visdata/mobile01.dbf RECID=56 STAMP=833274870
datafile 2 switched to datafile copy
input datafile copy RECID=1 STAMP=833274869 file name=/VIS1/visdata/sys2.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=2 STAMP=833274869 file name=/VIS1/visdata/sys3.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=3 STAMP=833274869 file name=/VIS1/visdata/sys4.dbf
datafile 5 switched to datafile copy
input datafile copy RECID=4 STAMP=833274869 file name=/VIS1/visdata/sys5.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=5 STAMP=833274869 file name=/VIS1/visdata/sys6.dbf
datafile 7 switched to datafile copy
input datafile copy RECID=6 STAMP=833274869 file name=/VIS1/visdata/sys7.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=833274869 file name=/VIS1/visdata/undo01.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=8 STAMP=833274869 file name=/VIS1/visdata/undo02.dbf
datafile 10 switched to datafile copy
input datafile copy RECID=9 STAMP=833274869 file name=/VIS1/visdata/undo03.dbf
datafile 11 switched to datafile copy
input datafile copy RECID=10 STAMP=833274869 file name=/VIS1/visdata/undo04.dbf
datafile 12 switched to datafile copy
input datafile copy RECID=11 STAMP=833274869 file name=/VIS1/visdata/archive1.dbf
datafile 13 switched to datafile copy
input datafile copy RECID=12 STAMP=833274869 file name=/VIS1/visdata/archive2.dbf
datafile 14 switched to datafile copy
input datafile copy RECID=13 STAMP=833274869 file name=/VIS1/visdata/media1.dbf
datafile 15 switched to datafile copy
input datafile copy RECID=14 STAMP=833274869 file name=/VIS1/visdata/media2.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=15 STAMP=833274869 file name=/VIS1/visdata/media3.dbf
datafile 17 switched to datafile copy
input datafile copy RECID=16 STAMP=833274869 file name=/VIS1/visdata/nologging1.dbf
datafile 18 switched to datafile copy
input datafile copy RECID=17 STAMP=833274869 file name=/VIS1/visdata/queues1.dbf
datafile 19 switched to datafile copy
input datafile copy RECID=18 STAMP=833274869 file name=/VIS1/visdata/queues2.dbf
datafile 20 switched to datafile copy
input datafile copy RECID=19 STAMP=833274869 file name=/VIS1/visdata/reference1.dbf
datafile 21 switched to datafile copy
input datafile copy RECID=20 STAMP=833274869 file name=/VIS1/visdata/reference2.dbf
datafile 22 switched to datafile copy
input datafile copy RECID=21 STAMP=833274869 file name=/VIS1/visdata/summary1.dbf
datafile 23 switched to datafile copy
input datafile copy RECID=22 STAMP=833274869 file name=/VIS1/visdata/summary2.dbf
datafile 24 switched to datafile copy
input datafile copy RECID=23 STAMP=833274869 file name=/VIS1/visdata/summary3.dbf
datafile 25 switched to datafile copy
input datafile copy RECID=24 STAMP=833274869 file name=/VIS1/visdata/summary4.dbf
datafile 26 switched to datafile copy
input datafile copy RECID=25 STAMP=833274869 file name=/VIS1/visdata/summary5.dbf
datafile 27 switched to datafile copy
input datafile copy RECID=26 STAMP=833274869 file name=/VIS1/visdata/tx_data1.dbf
datafile 28 switched to datafile copy
input datafile copy RECID=27 STAMP=833274869 file name=/VIS1/visdata/tx_data2.dbf
datafile 29 switched to datafile copy
input datafile copy RECID=28 STAMP=833274869 file name=/VIS1/visdata/tx_data3.dbf
datafile 30 switched to datafile copy
input datafile copy RECID=29 STAMP=833274869 file name=/VIS1/visdata/tx_data4.dbf
datafile 31 switched to datafile copy
input datafile copy RECID=30 STAMP=833274869 file name=/VIS1/visdata/tx_data5.dbf
datafile 32 switched to datafile copy
input datafile copy RECID=31 STAMP=833274869 file name=/VIS1/visdata/tx_data6.dbf
datafile 33 switched to datafile copy
input datafile copy RECID=32 STAMP=833274869 file name=/VIS1/visdata/tx_data7.dbf
datafile 34 switched to datafile copy
input datafile copy RECID=33 STAMP=833274869 file name=/VIS1/visdata/tx_data8.dbf
datafile 35 switched to datafile copy
input datafile copy RECID=34 STAMP=833274869 file name=/VIS1/visdata/tx_data9.dbf
datafile 36 switched to datafile copy
input datafile copy RECID=35 STAMP=833274869 file name=/VIS1/visdata/tx_data10.dbf
datafile 37 switched to datafile copy
input datafile copy RECID=36 STAMP=833274869 file name=/VIS1/visdata/tx_data11.dbf
datafile 38 switched to datafile copy
input datafile copy RECID=37 STAMP=833274869 file name=/VIS1/visdata/tx_idx1.dbf
datafile 39 switched to datafile copy
input datafile copy RECID=38 STAMP=833274869 file name=/VIS1/visdata/tx_idx2.dbf
datafile 40 switched to datafile copy
input datafile copy RECID=39 STAMP=833274869 file name=/VIS1/visdata/tx_idx3.dbf
datafile 41 switched to datafile copy
input datafile copy RECID=40 STAMP=833274869 file name=/VIS1/visdata/tx_idx4.dbf
datafile 42 switched to datafile copy
input datafile copy RECID=41 STAMP=833274869 file name=/VIS1/visdata/tx_idx5.dbf
datafile 43 switched to datafile copy
input datafile copy RECID=42 STAMP=833274869 file name=/VIS1/visdata/tx_idx6.dbf
datafile 44 switched to datafile copy
input datafile copy RECID=43 STAMP=833274869 file name=/VIS1/visdata/tx_idx7.dbf
datafile 45 switched to datafile copy
input datafile copy RECID=44 STAMP=833274869 file name=/VIS1/visdata/tx_idx8.dbf
datafile 46 switched to datafile copy
input datafile copy RECID=45 STAMP=833274869 file name=/VIS1/visdata/tx_idx9.dbf
datafile 47 switched to datafile copy
input datafile copy RECID=46 STAMP=833274869 file name=/VIS1/visdata/tx_idx10.dbf
datafile 48 switched to datafile copy
input datafile copy RECID=47 STAMP=833274869 file name=/VIS1/visdata/tx_idx11.dbf
datafile 49 switched to datafile copy
input datafile copy RECID=48 STAMP=833274869 file name=/VIS1/visdata/apps_ts_tx_interface.dbf
datafile 50 switched to datafile copy
input datafile copy RECID=49 STAMP=833274869 file name=/VIS1/visdata/ctx1.dbf
datafile 51 switched to datafile copy
input datafile copy RECID=50 STAMP=833274869 file name=/VIS1/visdata/sysaux01.dbf
datafile 52 switched to datafile copy
input datafile copy RECID=51 STAMP=833274869 file name=/VIS1/visdata/aadev.dbf
datafile 53 switched to datafile copy
input datafile copy RECID=52 STAMP=833274870 file name=/VIS1/visdata/odm.dbf
datafile 55 switched to datafile copy
input datafile copy RECID=53 STAMP=833274870 file name=/VIS1/visdata/olap.dbf
datafile 56 switched to datafile copy
input datafile copy RECID=54 STAMP=833274870 file name=/VIS1/visdata/owa1.dbf
datafile 57 switched to datafile copy
input datafile copy RECID=55 STAMP=833274870 file name=/VIS1/visdata/portal.dbf
datafile 58 switched to datafile copy
input datafile copy RECID=56 STAMP=833274870 file name=/VIS1/visdata/mobile01.dbf

contents of Memory Script:
{
   Alter clone database open resetlogs;
}
executing Memory Script

database opened
Finished Duplicate Db at 04-DEC-13

RMAN>
RMAN>


Your clone instance is ready.


SQL> select INSTANCE_NAME, STATUS from V$INSTANCE;

INSTANCE_NAME   STATUS 
-------------  ------ 
VIS1            OPEN  


What Happens During DB Creation.



Starting the Oracle instance by exporting ORACLE_HOME and setting the initialization parameters

Starting sequence of background processes


PMON background process Started
PSP0 background process started
VKTM background process started
GEN0 background process started
DIAG background process started
DBRM background process started
DIA0 background process started
MMAN background process started
DBW0 background process started
LGWR background process started
CKPT background process started
SMON background process started
RECO background process started
MMON background process started
MMNL background process started

Starting 1 dispatcher for network address '(ADDRESS=(PARTIAL=YES)(PROTOCOL=TCP))'
Starting 1 shared server

Creation of Control File


Starting ASMB background process
Starting RBAL background process
Starting MARK background process
MARK loads the ASM library and mounts the disk groups

Database mounted in Exclusive mode

Lost write protection disabled

Successful MOUNT


onlinelog 1 - created and open

SMON enables cache recovery

SYSTEM - created and set to default
SYSAUX - created and set to default
UNDO - created and set to default
TEMP - created and set to default

SMON enabling transaction recovery
SMCO started
QMNC started

Create Database completed


USERS - created and set to default

onlinelog 2 - created and open
onlinelog 3 - created and open


Setting db_securefile = 'PERMITTED' means LOBs are allowed to be created as SecureFiles.

Shutting down instance (immediate)
Stopping background process SMCO
Shutting down instance: further logons disabled
Stopping background process CJQ0
Stopping background process QMNC
Stopping background process MMNL
Stopping background process MMON

All dispatchers and shared servers shutdown

ALTER DATABASE CLOSE NORMAL

SMON: disabling tx recovery
SMON: disabling cache recovery
Shutting down archive processes
Archiving is disabled
Archive process shutdown avoided: 0 active
Deferred communication with ASM instance
Completed: ALTER DATABASE CLOSE NORMAL

ALTER DATABASE DISMOUNT

Shutting down archive processes
ARCH: Archival disabled due to shutdown: 1089
Stopping background process VKTM
Shutting down MARK background process
Instance shutdown complete

Oracle will start the instance and open the database.

SQL>

A total of 7 log switches occur during database creation.
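The log-switch count can be confirmed afterwards from V$LOG_HISTORY; a hedged sketch (the one-hour window is illustrative — widen it to cover the database creation time):

```sql
-- Count log switches recorded in the last hour.
SELECT COUNT(*) AS log_switches
  FROM v$log_history
 WHERE first_time > SYSDATE - 1/24;
```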


To check the background processes from the data dictionary view:

select NAME, DESCRIPTION
  from V$BGPROCESS
 where PADDR <> '00'
 order by 1;

NAME  DESCRIPTION
----- -----------------------------------
ASMB  ASM Background
CJQ0  Job Queue Coordinator
CKPT  checkpoint
DBRM  DataBase Resource Manager
DBW0  db writer process 0
DIA0  diagnosibility process 0
DIAG  diagnosibility process
GEN0  generic0
LGWR  Redo etc.
MARK  mark AU for resync coordinator
MMAN  Memory Manager
MMNL  Manageability Monitor Process 2
MMON  Manageability Monitor Process
PMON  process cleanup
PSP0  process spawner 0
QMNC  AQ Coordinator
RBAL  ASM Rebalance master
RECO  distributed recovery
SMCO  Space Manager Process
SMON  System Monitor Process
VKTM  Virtual Keeper of TiMe process

GoldenGate parameter "SCHEMATRANDATA"

Use ADD SCHEMATRANDATA to enable schema-level supplemental logging for Oracle tables. ADD SCHEMATRANDATA acts on all of the current and future tables in a given schema to automatically log a superset of available keys that Oracle GoldenGate needs for row identification.

ADD SCHEMATRANDATA does the following:

● Enables Oracle supplemental logging for new tables created with a CREATE TABLE.
● Updates supplemental logging for tables affected by an ALTER TABLE to add or drop columns.
● Updates supplemental logging for tables that are renamed.
● Updates supplemental logging for tables for which unique or primary keys are added or dropped.
ADD SCHEMATRANDATA logs the key columns of a table in the following order of priority:
● Primary key
● In the absence of a primary key, all of the unique keys of the table, including those that are disabled, unusable or invisible. Unique keys that contain ADT member columns are also logged. Only unique keys on virtual columns (function-based indexes) are not logged.
● If none of the preceding exists, all scalar columns

When to Use ADD SCHEMATRANDATA

ADD SCHEMATRANDATA must be used in the following cases:
  • For all tables that are part of an Extract group that is to be configured for integrated capture. ADD SCHEMATRANDATA ensures that the correct key is logged by logging all of the keys.
  • For all tables that will be processed in an integrated Replicat group. Options are provided that enable the logging of the primary, unique, and foreign keys to support the computation of dependencies among relational tables being processed through different apply servers.
  • When DDL replication is active and DML is concurrent with DDL that creates new tables or alters key columns. It best handles scenarios where DML can be applied to objects very shortly after DDL is issued on them. ADD SCHEMATRANDATA causes the appropriate key values to be logged in the redo log atomically with each DDL operation, thus ensuring metadata continuity for the DML when it is captured from the log, despite any lag in Extract processing.
Database-level Logging Requirements for Using ADD SCHEMATRANDATA
Oracle strongly encourages putting the source database into forced logging mode and enabling minimal supplemental logging at the database level when using Oracle GoldenGate. This adds row chaining information, if any exists, to the redo log for update operations. See Installing and Configuring Oracle GoldenGate for Oracle Database for more information about configuring logging to support Oracle GoldenGate.
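The database-level logging described above can be enabled with the following standard statements (run as a SYSDBA user):

```sql
-- Force all changes into the redo log, regardless of any
-- NOLOGGING settings on individual objects.
ALTER DATABASE FORCE LOGGING;

-- Enable minimal (database-level) supplemental logging.
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

-- Verify both settings.
SELECT force_logging, supplemental_log_data_min FROM v$database;
```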

Additional Considerations for Using ADD SCHEMATRANDATA

·         Before using ADD SCHEMATRANDATA, issue the DBLOGIN command. The user who issues the command must be granted the Oracle Streams administrator privilege.

SQL> exec dbms_streams_auth.grant_admin_privilege('user')

·         ADD SCHEMATRANDATA can be used instead of the ADD TRANDATA command when DDL replication is not enabled. Note, however, that if a table has no primary key but has multiple unique keys, ADD SCHEMATRANDATA causes the database to log all of the unique keys. In such cases, ADD SCHEMATRANDATA causes the database to log more redo data than does ADD TRANDATA. To avoid the extra logging, designate one of the unique keys as a primary key, if possible.

·         For tables with a primary key, with a single unique key, or without a key, ADD SCHEMATRANDATA adds no additional logging overhead, as compared to ADD TRANDATA.

·         If you must log additional, non-key columns of a specific table (or tables) for use by Oracle GoldenGate, such as those needed for FILTER statements and KEYCOLS clauses in the TABLE and MAP parameters, issue an ADD TRANDATA command for those columns. That command has a COLS option to issue table-level supplemental logging for the columns, and it can be used in conjunction with ADD SCHEMATRANDATA.
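A sketch of combining the two commands as described above (the schema, table, and column names are illustrative):

```
GGSCI> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
GGSCI> ADD SCHEMATRANDATA hr
GGSCI> ADD TRANDATA hr.employees, COLS (department_id, salary)
```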

Syntax

ADD SCHEMATRANDATA schema [ALLOWNONVALIDATEDKEYS] [NOSCHEDULINGCOLS | ALLCOLS]

schema

The schema for which you want the supplementary key information to be logged. Do not use a wildcard. To issue ADD SCHEMATRANDATA for schemas in more than one pluggable database of a multitenant container database, log in to each pluggable database separately with DBLOGIN and then issue ADD SCHEMATRANDATA.

ALLOWNONVALIDATEDKEYS

This option is valid for Oracle 11.2.0.4 and later 11g versions and Oracle 12.1.0.2 and later 12c versions. (Not available for Oracle 12.1.0.1.) It includes NON VALIDATED and NOT VALID primary keys in the supplemental logging. These keys override the normal key selection criteria that is used by Oracle GoldenGate. If the GLOBALS parameter ALLOWNONVALIDATEDKEYS is being used, ADD SCHEMATRANDATA runs with ALLOWNONVALIDATEDKEYS whether or not it is specified. By default NON VALIDATED and NOT VALID primary keys are not logged. For more information, see the GLOBALS parameter ALLOWNONVALIDATEDKEYS.

NOSCHEDULINGCOLS | ALLCOLS

These options support integrated Replicat for an Oracle target database.

NOSCHEDULINGCOLS

Disables the logging of scheduling columns. By default, ADD SCHEMATRANDATA enables the unconditional logging of the primary key and the conditional supplemental logging of all unique key(s) and foreign key(s) of all current and future tables in the given schema. Unconditional logging forces the primary key values to the log whether or not the key was changed in the current operation. Conditional logging logs all of the column values of a foreign or unique key if at least one of them was changed in the current operation. The primary key, unique keys, and foreign keys must all be available to the inbound server to compute dependencies. For more information about integrated Replicat, see Installing and Configuring Oracle GoldenGate for Oracle Database.

ALLCOLS

Enables the unconditional supplemental logging of all supported key and non-key columns for all current and future tables in the given schema. This option enables the logging of the keys required to compute dependencies, plus columns that are required for filtering, conflict resolution, or other purposes.
 
Example 1   
The following enables supplemental logging for the schema scott.

ADD SCHEMATRANDATA scott

Example 2   
The following example logs all supported key and non-key columns for all current and future tables in the schema named scott.

ADD SCHEMATRANDATA scott ALLCOLS
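Schema-level logging can then be verified from GGSCI with the INFO SCHEMATRANDATA command (issue DBLOGIN first):

```
GGSCI> INFO SCHEMATRANDATA scott
```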

OGG - PASSTHRU/NOPASSTHRU



PASSTHRU - NOPASSTHRU

You can specify the PASSTHRU parameter on the data pump if you aren’t doing any filtering or column mapping and your source and target data structures are identical. For GoldenGate to consider the tables identical, they must have the same column names, data types, sizes, and semantics and appear in the same order. Using PASSTHRU improves performance by allowing GoldenGate to bypass looking up any table definitions from the database or the data-definitions file.

The PASSTHRU and NOPASSTHRU parameters are TABLE-specific. One parameter remains in effect for all subsequent TABLE statements, until the other parameter is encountered. This enables you to specify pass-through behavior for one set of tables while using normal processing, including data filtering and other manipulation, for other tables.

Example

The following parameter file configures pass-through mode for all data from fin.acct, but allows normal processing for fin.sales.

EXTRACT fin
USERIDALIAS tiger1
RMTHOST sysb, MGRPORT 7809, ENCRYPT AES192 KEYNAME mykey
ENCRYPTTRAIL AES192
RMTTRAIL /ggs/dirdat/rt
PASSTHRU
TABLE fin.acct;
NOPASSTHRU
TABLE fin.sales, WHERE (ACCOUNT-CODE < 100);

In PASSTHRU mode, the data pump will not perform automatic ASCII-to-EBCDIC or EBCDIC-to-ASCII conversion.

PASSTHRU in DDL Replication

DDL is propagated by a data pump in PASSTHRU mode automatically. As a result, DDL that is performed on a source table of a certain name (for example ALTER TABLE TableA...) is processed by the data pump using the same name (ALTER TABLE TableA). The name cannot be mapped by the data pump as ALTER TABLE TableB, regardless of any TABLE statements that specify otherwise.

Importance of Bounded Recovery on GoldenGate


The Importance of Bounded Recovery


Bounded Recovery is a component of Oracle GoldenGate’s Extract process checkpointing facility. It guarantees an efficient recovery after Extract stops for any reason, planned or unplanned, no matter how many open (uncommitted) transactions there were at the time that Extract stopped, nor how old they were. Bounded Recovery sets an upper boundary for the maximum amount of time that it would take for Extract to recover to the point where it stopped and then resume normal processing.
Extract performs this recovery as follows:
·         If there were no open transactions when Extract stopped, the recovery begins at the current Extract read checkpoint. This is a normal recovery.
·         If there were open transactions whose start points in the log were very close in time to the time when Extract stopped, Extract begins recovery by re-reading the logs from the beginning of the oldest open transaction. This requires Extract to do redundant work for transactions that were already written to the trail or discarded before Extract stopped, but that work is an acceptable cost given the relatively small amount of data to process. This also is considered a normal recovery.
·         If there were one or more transactions that Extract qualified as long-running open transactions, Extract begins its recovery with a Bounded Recovery.

Bounded Recovery is a new feature in OGG 11.1; this is how it works:
A transaction qualifies as long-running if it has been open longer than one Bounded Recovery interval, which is specified with the BRINTERVAL option of the BR parameter.
For example, if the Bounded Recovery interval is four hours, a long-running open transaction is any transaction that started more than four hours ago.
At each Bounded Recovery interval, Extract makes a Bounded Recovery checkpoint, which persists the current state and data of Extract to disk, including the state and data (if any) of long-running transactions. If Extract stops after a Bounded Recovery checkpoint, it will recover from a position within the previous Bounded Recovery interval or at the last Bounded Recovery checkpoint, instead of processing from the log position where the oldest open long-running transaction first appeared, which could be several trail files ago.
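Long-running open transactions of the kind described above can be inspected from GGSCI with the SHOWTRANS option of SEND EXTRACT (the group name EXT1 and the threshold are illustrative):

```
GGSCI> SEND EXTRACT EXT1, SHOWTRANS DURATION 4 HOURS
```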

Bounded Recovery is enabled by default for Extract processes and has a 4 hour BR interval. To adjust the BR interval to say 24 hours, use the following syntax in your Extract parameter file:

BR BRINTERVAL 24, BRDIR BR

The default location for BR checkpoint files is the GoldenGate home directory. This can be altered by including a full path:

BR BRINTERVAL 24, BRDIR /ggsdata/brcheckpoint

Case Study

The Problem

In a recent case, Bounded Recovery was disabled through the following Extract parameter:
BR BROFF
Consequently the following behavior prevented the Extract process from recovering and starting.
1.       Firstly, GoldenGate had fallen behind due to a batch job, so the Extract process was reading the archived redo logs and not the online redo logs. At the same time an archived redo log was deleted by RMAN during a scheduled backup, which caused the Extract process to abend with OGG-00446 (caused by ORA-15173).
Error in ggserr.log
2012-07-04 11:03:03  ERROR OGG-00446  Oracle GoldenGate Capture for Oracle, euktds01.prm:  Getting attributes for ASM file +FRA/2_86717_716466928.dbf, SQL <BEGIN dbms_diskgroup.getfileattr('+FRA/2_86717_716466928.dbf', :filetype, :filesize, :lblksize); END;>: (15056) ORA-15056: additional error message ORA-15173: entry '2_86717_716466928.dbf' does not exist in directory '/' ORA-06512: at "SYS.X$DBMS_DISKGROUP", line 304 ORA-06512: at line 1 Not able to establish initial position for sequence 86717, RBA 122140688.
2012-07-04 11:03:03  ERROR OGG-01668  Oracle GoldenGate Capture for Oracle, euktds01.prm:  PROCESS ABENDING.


2.       Some hours later, the deleted archived redo log file was restored and the Extract process restarted. However, despite the process running, the RBA# and Sequence# were not incrementing. The Extract process was stuck!
The INFO GGSCI command with DETAIL option revealed the source redo was not available.
GGSCI (dbserver09a) 2> info EUKMDS01, detail

Extract Source                               Begin             End

Not Available                                2012-07-04 23:30  2012-07-04 23:30
Not Available                                2012-07-04 23:28  2012-07-04 23:30
Not Available                                2012-07-01 05:35  2012-07-04 23:28
+DATA/ukhub/onlinelog/group_4.282.716467031  2012-06-24 05:28  2012-07-01 05:35
+DATA/ukhub/onlinelog/group_3.280.716467027  2012-06-23 21:06  2012-06-24 05:28

3.       The ggserr.log also revealed that a long-running transaction was detected.
2012-07-04 23:31:47  WARNING OGG-01027  Oracle GoldenGate Capture for Oracle, euko1els.prm:  Long Running Transaction: XID 197.8.3521317, Items 0, Extract EUKO1ELS, Redo Thread 2, SCN 51.3925309013 (222968641109), Redo Seq #86717, Redo RBA 122140688.


The Solution

The Extract process was stuck in recovery mode, but could not find the starting RBA. In order to get the process up and running, the following steps were executed on the source system.
1.       First of all, the Extract process was stopped with the force option.
GGSCI (dbserver09a) 4> send extract EUKMDS01, forcestop
2.       The start position of the Extract process was altered to the beginning of the long running transaction.
GGSCI (dbserver09a) 5> alter extract EUKMDS01, begin 2012-07-04 23:31:47
3.       The extract process was started.
GGSCI (dbserver09a) 4> start extract EUKMDS01
4.       Sure enough, the Extract process was reinitialized and continued to process the backlog.
GGSCI (uklpdptoy09a) 2> info EUKMDS01, detail

Extract Source                               Begin             End

+DATA/ukhub/onlinelog/group_4.282.716467031  2012-07-04 23:31  2012-07-05 02:58
Not Available                                * Initialized *   2012-07-04 23:31
Not Available                                2012-07-04 23:30  2012-07-04 23:30


Conclusion

Never disable Bounded Recovery, or Extract processes may fail to recover automatically. Furthermore, prevent RMAN from deleting archived log files that are still required: if you register the Extract with LOGRETENTION, GoldenGate will retain the archive logs that Extract needs for recovery.
To register Extract do the following:

1.       Stop the Extract (ensure that all the archive log files from the recovery checkpoint through the current checkpoint are available on all nodes)
2.       Execute the following GGSCI commands
GGSCI> dblogin userid <username>, password <password>
GGSCI> register extract <Extract-name>, LOGRETENTION

You can confirm whether the Extract is registered using the query “select * from dba_capture”. (This sounds like Streams!) It should have an entry for the Extract.
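A slightly more targeted query than select * uses the standard DBA_CAPTURE columns (the capture name for a registered Extract commonly begins with OGG$):

```sql
-- A registered Extract appears as a capture process.
SELECT capture_name, status, start_scn
  FROM dba_capture;
```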

3.       Start the Extract
GGSCI> start extract <Extract-name>

OGG-00664 OCI Error beginning session

GGSCI (pkdb4) 1> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTTR       09:56:11      00:00:18
EXTRACT     STOPPED     EXTTRP      00:00:00      20:36:52


GGSCI (pkdb4) 2> start exttrp

Sending START request to MANAGER ...
EXTRACT EXTTRP starting


GGSCI (pkdb4) 3> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTTR       09:56:11      00:00:23
EXTRACT     STOPPED     EXTTRP      00:00:00      20:36:57


GGSCI (pkdb4) 4> view report exttrp


***********************************************************************
                 Oracle GoldenGate Capture for Oracle
                     Version 11.1.1.0.0 Build 078
   HP/UX, IA64, 64bit (optimized), Oracle 10 on Jul 28 2010 15:49:30

Copyright (C) 1995, 2010, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2014-07-01 07:29:55
***********************************************************************

Operating System Version:
HP-UX
Version U, Release B.11.31
Node: pkdb4
Machine: ia64
                         soft limit   hard limit
Address Space Size   :    unlimited    unlimited
Heap Size            :   4294967296   4294967296
File Size            :    unlimited    unlimited
CPU Time             :    unlimited    unlimited

Process id: 1468

Description:

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************
EXTRACT exttrp
USERID ggs_dr_owner, PASSWORD ************

Source Context :
  SourceModule            : [ggdb.ora.sess]
  SourceID                : [/home/ecloud/workspace/Build_OpenSys_r11.1.1.0.0_078_[34100]/perforce/src/gglib/ggdbora/ocisess.c]
  SourceFunction          : [OCISESS_try]
  SourceLine              : [498]

2014-07-01 07:29:55  ERROR   OGG-00664  OCI Error beginning session (status = 1034-ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
HPUX-ia64 Error: 2: No such file or directory).

2014-07-01 07:29:55  ERROR   OGG-01668  PROCESS ABENDING.


Reason :

If multiple database instances are running on the same host, we need to specify which instance the Extract has to connect to.

FIX : 

Option 1: Set the ORACLE_SID and ORACLE_HOME values with SETENV in the parameter file

SETENV (ORACLE_SID=TXNTR1)
SETENV (ORACLE_HOME=/u01/app/oracle/product/10.2.0/db)
USERID ggs_owner, PASSWORD ggs_owner

Option 2: Use a TNS entry (if multiple DB instances are running on the same node)

USERID ggs_owner@tns_entry, PASSWORD ggs_owner
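Option 2 presumes a corresponding alias in tnsnames.ora; a sketch (the host, port, and service name are illustrative):

```
tns_entry =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = pkdb4)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = TXNTR1))
  )
```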

----------------------------------------------------------
GGSCI (pkdb4) 5> start exttrp

Sending START request to MANAGER ...
EXTRACT EXTTRP starting


GGSCI (pkdb4) 6> info all

Program     Status      Group       Lag           Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXTTR       09:56:11      00:00:23
EXTRACT     RUNNING     EXTTRP      20:36:22      00:00:14

Oracle GoldenGate DDL Replication



1.       Make sure the GoldenGate user has a non-default tablespace (not the USERS tablespace)

SQL> select username, default_tablespace from dba_users where username='GGS_DR_OWNER';

USERNAME                       DEFAULT_TABLESPACE
------------------------------ ------------------------------
GGS_DR_OWNER                   GGS_DATA

2.       Turn off the recycle bin for the source database.

SQL> alter session set recyclebin=OFF;

Session altered.


3.       You will find the below execution scripts under GoldenGate Software Home Directory.

SQL> @marker_setup

Marker setup script

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGS_DR_OWNER


Marker setup table script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGS_DR_OWNER

MARKER TABLE
-------------------------------
OK

MARKER SEQUENCE
-------------------------------
OK

Script complete.



SQL> @ddl_setup

GoldenGate DDL Replication setup script

Verifying that current user has privileges to install DDL Replication...

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Oracle 11g and later, it can be enabled.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGS_DR_OWNER

You will be prompted for the mode of installation.
To install or reinstall DDL replication, enter INITIALSETUP
To upgrade DDL replication, enter NORMAL
Enter mode of installation:INITIALSETUP

Working, please wait ...
Spooling to file ddl_setup_spool.txt

Checking for sessions that are holding locks on Oracle Golden Gate metadata tables ...

Check complete.


Using GGS_DR_OWNER as a GoldenGate schema name, INITIALSETUP as a mode of installation.

Working, please wait ...

RECYCLEBIN must be empty.
This installation will purge RECYCLEBIN for all users.
To proceed, enter yes. To stop installation, enter no.

Enter yes or no:yes


DDL replication setup script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGS_DR_OWNER

DDLORA_GETTABLESPACESIZE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

CLEAR_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

CREATE_TRACE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

TRACE_PUT_LINE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

INITIAL_SETUP STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLVERSIONSPECIFIC PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDLREPLICATION PACKAGE BODY STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL HISTORY TABLE
-----------------------------------
OK

DDL HISTORY TABLE(1)
-----------------------------------
OK

DDL DUMP TABLES
-----------------------------------
OK

DDL DUMP COLUMNS
-----------------------------------
OK

DDL DUMP LOG GROUPS
-----------------------------------
OK

DDL DUMP PARTITIONS
-----------------------------------
OK

DDL DUMP PRIMARY KEYS
-----------------------------------
OK

DDL SEQUENCE
-----------------------------------
OK

GGS_TEMP_COLS
-----------------------------------
OK

GGS_TEMP_UK
-----------------------------------
OK

DDL TRIGGER CODE STATUS:

Line/pos   Error
---------- -----------------------------------------------------------------
No errors  No errors

DDL TRIGGER INSTALL STATUS
-----------------------------------
OK

DDL TRIGGER RUNNING STATUS
-----------------------------------
ENABLED

STAYMETADATA IN TRIGGER
-----------------------------------
OFF

DDL TRIGGER SQL TRACING
-----------------------------------
0

DDL TRIGGER TRACE LEVEL
-----------------------------------
0

LOCATION OF DDL TRACE FILE
------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/admin/TXNQA/udump/ggs_ddl_trace.log

Analyzing installation status...


STATUS OF DDL REPLICATION
------------------------------------------------------------------------------------------------------------------------
SUCCESSFUL installation of DDL Replication software components

Script complete.



SQL> @role_setup

GGS Role setup script

This script will drop and recreate the role GGS_GGSUSER_ROLE
To use a different role name, quit this script and then edit the params.sql script to change the gg_role parameter to the preferred name. (Do not run the script.)

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGS_DR_OWNER
Wrote file role_setup_set.txt

PL/SQL procedure successfully completed.


Role setup script complete

Grant this role to each user assigned to the Extract, GGSCI, and Manager processes, by using the following SQL command:

GRANT GGS_GGSUSER_ROLE TO <loggedUser>

where <loggedUser> is the user assigned to the GoldenGate processes.


SQL> GRANT GGS_GGSUSER_ROLE TO GGS_DR_OWNER;

Grant succeeded.


SQL> @ddl_enable

Trigger altered.

SQL> @ddl_pin GGS_DR_OWNER

PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


PL/SQL procedure successfully completed.


GGSCI 1> DBLOGIN USERID GGS_DR_OWNER, PASSWORD GGS_DR_OWNER

GGSCI 2> ADD TRANDATA SUKUMAR.emp

EXTRACT ddlext
SETENV (NLS_LANG=AMERICAN_AMERICA.UTF8)
SETENV ORACLE_SID=sukumar
USERID GGS_DR_OWNER@DB_TNS, PASSWORD GGS_DR_OWNER
RMTHOST OGG01.sukumar.com, MGRPORT 7809
RMTTRAIL /u01/GG/source/dirdat/dd
TRANLOGOPTIONS ASMUSER sys@ASM_TNS, ASMPASSWORD asm123
DDL INCLUDE MAPPED                        ----------------- Mandatory parameter for DDL replication
TABLE SUKUMAR.emp;
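On the target side, a matching Replicat parameter file might look roughly like this (the group name ddlrep, TNS alias TGT_TNS, and mapping are assumptions for illustration, not from the original setup):

```
REPLICAT ddlrep
SETENV (NLS_LANG=AMERICAN_AMERICA.UTF8)
USERID GGS_DR_OWNER@TGT_TNS, PASSWORD GGS_DR_OWNER
ASSUMETARGETDEFS
DDL INCLUDE MAPPED
MAP SUKUMAR.emp, TARGET SUKUMAR.emp;
```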


After completing the DDL setup, if you don’t want to replicate DDLs, there is an option to disable it.

SQL> @ddl_disable

Trigger altered.



Upgrading the GoldenGate Version

From version     : 11.2.1.0.3
To Version          : 11.2.1.0.8
OS version          : Linux_x64

(Download the GoldenGate Patch from support.oracle.com)

Step 1: Go to the GoldenGate home directory

[oracle:sukumar:/app/oracle]$ cd /app/oracle/product/ggate
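Before extracting the patch over the existing home, it is prudent to back the home up first. A self-contained sketch follows — it uses a throwaway directory so the commands run anywhere; substitute your real GoldenGate home (e.g. /app/oracle/product/ggate) for the mktemp path:

```shell
# Back up a GoldenGate home before applying a patch in place.
GG_HOME=$(mktemp -d)            # stand-in for /app/oracle/product/ggate
touch "$GG_HOME/ggsci"          # stand-in for the real binaries
BACKUP="$GG_HOME.tar"
# Archive the home relative to its parent directory.
tar -cf "$BACKUP" -C "$(dirname "$GG_HOME")" "$(basename "$GG_HOME")"
# List the archive contents to confirm the backup.
tar -tf "$BACKUP"
```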

Step 2: Unzip the patch file, which extracts the tar file.

 [oracle:sukumar:/app/oracle/product/ggate]$ ls -lrt *.zip
-rwxr-xr-x 1 oracle oinstall 88475139 Jul 13 06:47 p17011917_112108_Linux-x86-64.zip

[oracle:sukumar:/app/oracle/product/ggate]$ unzip p17011917_112108_Linux-x86-64.zip
Archive:  p17011917_112108_Linux-x86-64.zip
  inflating: fbo_ggs_Linux_x64_ora11g_64bit.tar
  inflating: Oracle-GoldenGate-11.2.1.0.8-README.txt
  inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.8.pdf

[oracle:sukumar:/app/oracle/product/ggate]$ ls -lrt *.tar
-rw-rw-r-- 1 oracle oinstall 227624960 Jul 18  2013 fbo_ggs_Linux_x64_ora11g_64bit.tar

Step 3: Make sure to stop all the GoldenGate processes.

[oracle:sukumar:/app/oracle/product/ggate]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.



GGSCI (sukumar) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1      00:00:00      00:00:06
EXTRACT     RUNNING     EXT2      00:00:00      00:00:07
EXTRACT     RUNNING     PUMP1     00:00:00      00:00:07
EXTRACT     RUNNING     PUMP2     00:00:00      00:00:02

GGSCI (sukumar) 2> stop *

Sending STOP request to EXTRACT EXT1 ...
Request processed.

Sending STOP request to EXTRACT EXT2 ...
Request processed.

Sending STOP request to EXTRACT PUMP1 ...
Request processed.

Sending STOP request to EXTRACT PUMP2 ...
Request processed.



GGSCI (sukumar) 3> stop manager
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)? y

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

Step 4: Untar the tar file under the GoldenGate home directory

[oracle:sukumar:/app/oracle/product/ggate]$ ls -lrt *.tar
-rw-rw-r-- 1 oracle oinstall 227624960 Jul 18  2013 fbo_ggs_Linux_x64_ora11g_64bit.tar

[oracle:sukumar:/app/oracle/product/ggate]$ tar -xvf fbo_ggs_Linux_x64_ora11g_64bit.tar
UserExitExamples/
UserExitExamples/ExitDemo_more_recs/
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.LINUX
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.SOLARIS
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.HPUX
UserExitExamples/ExitDemo_more_recs/readme.txt
UserExitExamples/ExitDemo_more_recs/exitdemo_more_recs.c
UserExitExamples/ExitDemo_more_recs/exitdemo_more_recs.vcproj
UserExitExamples/ExitDemo_more_recs/Makefile_more_recs.AIX
UserExitExamples/ExitDemo_pk_befores/
UserExitExamples/ExitDemo_pk_befores/exitdemo_pk_befores.c
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.SOLARIS
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.AIX
UserExitExamples/ExitDemo_pk_befores/readme.txt
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.HPUX
UserExitExamples/ExitDemo_pk_befores/exitdemo_pk_befores.vcproj
UserExitExamples/ExitDemo_pk_befores/Makefile_pk_befores.LINUX
UserExitExamples/ExitDemo_lobs/
UserExitExamples/ExitDemo_lobs/Makefile_lob.SOLARIS
UserExitExamples/ExitDemo_lobs/exitdemo_lob.c
UserExitExamples/ExitDemo_lobs/readme.txt
UserExitExamples/ExitDemo_lobs/Makefile_lob.HPUX
UserExitExamples/ExitDemo_lobs/Makefile_lob.LINUX
UserExitExamples/ExitDemo_lobs/Makefile_lob.AIX
UserExitExamples/ExitDemo_lobs/exitdemo_lob.vcproj
UserExitExamples/ExitDemo/
UserExitExamples/ExitDemo/exitdemo.vcproj
UserExitExamples/ExitDemo/Makefile_exit_demo.HP_OSS
UserExitExamples/ExitDemo/Makefile_exit_demo.HPUX
UserExitExamples/ExitDemo/exitdemo_utf16.c
UserExitExamples/ExitDemo/exitdemo.c
UserExitExamples/ExitDemo/readme.txt
UserExitExamples/ExitDemo/Makefile_exit_demo.LINUX
UserExitExamples/ExitDemo/Makefile_exit_demo.SOLARIS
UserExitExamples/ExitDemo/Makefile_exit_demo.AIX
UserExitExamples/ExitDemo_passthru/
UserExitExamples/ExitDemo_passthru/Makefile_passthru.HPUX
UserExitExamples/ExitDemo_passthru/Makefile_passthru.AIX
UserExitExamples/ExitDemo_passthru/Makefile_passthru.LINUX
UserExitExamples/ExitDemo_passthru/readme.txt
UserExitExamples/ExitDemo_passthru/Makefile_passthru.SOLARIS
UserExitExamples/ExitDemo_passthru/Makefile_passthru.HP_OSS
UserExitExamples/ExitDemo_passthru/exitdemo_passthru.c
UserExitExamples/ExitDemo_passthru/exitdemopassthru.vcproj
bcpfmt.tpl
bcrypt.txt
cfg/
cfg/Config.properties
cfg/mpmetadata.xml
cfg/MPMetadataSchema.xsd
cfg/password.properties
cfg/jps-config-jse.xml
cfg/ProfileConfig.xml
chkpt_ora_create.sql
cobgen
convchk
db2cntl.tpl
ddl_cleartrace.sql
ddl_create.sql
ddl_ddl2file.sql
ddl_disable.sql
ddl_enable.sql
ddl_filter.sql
ddl_ora10.sql
ddl_ora10upCommon.sql
ddl_ora11.sql
ddl_ora9.sql
ddl_pin.sql
ddl_remove.sql
ddl_session.sql
ddl_session1.sql
ddl_setup.sql
ddl_status.sql
ddl_staymetadata_off.sql
ddl_staymetadata_on.sql
ddl_trace_off.sql
ddl_trace_on.sql
ddl_tracelevel.sql
ddlcob
defgen
demo_more_ora_create.sql
demo_more_ora_insert.sql
demo_ora_create.sql
demo_ora_insert.sql
demo_ora_lob_create.sql
demo_ora_misc.sql
demo_ora_pk_befores_create.sql
demo_ora_pk_befores_insert.sql
demo_ora_pk_befores_updates.sql
dirjar/
dirjar/spring-security-core-3.0.1.RELEASE.jar
dirjar/jmxremote_optional-1.0-b02.jar
dirjar/org.springframework.core-3.0.0.RELEASE.jar
dirjar/org.springframework.aspects-3.0.0.RELEASE.jar
dirjar/xmlparserv2.jar
dirjar/org.springframework.context-3.0.0.RELEASE.jar
dirjar/jps-patching.jar
dirjar/monitor-common.jar
dirjar/osdt_cert.jar
dirjar/org.springframework.context.support-3.0.0.RELEASE.jar
dirjar/log4j-1.2.15.jar
dirjar/org.springframework.web-3.0.0.RELEASE.jar
dirjar/commons-logging-1.0.4.jar
dirjar/osdt_xmlsec.jar
dirjar/org.springframework.jdbc-3.0.0.RELEASE.jar
dirjar/jps-mbeans.jar
dirjar/jagent.jar
dirjar/org.springframework.asm-3.0.0.RELEASE.jar
dirjar/osdt_core.jar
dirjar/jps-manifest.jar
dirjar/identitystore.jar
dirjar/jsr250-api-1.0.jar
dirjar/jps-common.jar
dirjar/jps-upgrade.jar
dirjar/xpp3_min-1.1.4c.jar
dirjar/org.springframework.transaction-3.0.0.RELEASE.jar
dirjar/org.springframework.instrument-3.0.0.RELEASE.jar
dirjar/jps-unsupported-api.jar
dirjar/fmw_audit.jar
dirjar/org.springframework.beans-3.0.0.RELEASE.jar
dirjar/spring-security-web-3.0.1.RELEASE.jar
dirjar/jps-api.jar
dirjar/jps-internal.jar
dirjar/spring-security-taglibs-3.0.1.RELEASE.jar
dirjar/slf4j-api-1.4.3.jar
dirjar/spring-security-config-3.0.1.RELEASE.jar
dirjar/ldapjclnt11.jar
dirjar/jps-ee.jar
dirjar/xstream-1.3.jar
dirjar/spring-security-acl-3.0.1.RELEASE.jar
dirjar/commons-codec-1.3.jar
dirjar/spring-security-cas-client-3.0.1.RELEASE.jar
dirjar/jps-wls.jar
dirjar/jacc-spi.jar
dirjar/org.springframework.orm-3.0.0.RELEASE.jar
dirjar/org.springframework.expression-3.0.0.RELEASE.jar
dirjar/oraclepki.jar
dirjar/identityutils.jar
dirjar/org.springframework.test-3.0.0.RELEASE.jar
dirjar/slf4j-log4j12-1.4.3.jar
dirjar/jdmkrt-1.0-b02.jar
dirjar/org.springframework.aop-3.0.0.RELEASE.jar
dirprm/
dirprm/jagent.prm
emsclnt
extract
freeBSD.txt
ggMessage.dat
ggcmd
ggsci
help.txt
jagent.sh
keygen
libantlr3c.so
libdb-5.2.so
libgglog.so
libggrepo.so
libicudata.so.38
libicui18n.so.38
libicuuc.so.38
libxerces-c.so.28
libxml2.txt
logdump
marker_remove.sql
marker_setup.sql
marker_status.sql
mgr
notices.txt
oggerr
params.sql
prvtclkm.plb
pw_agent_util.sh
remove_seq.sql
replicat
retrace
reverse
role_setup.sql
sequence.sql
server
sqlldr.tpl
tcperrs
ucharset.h
ulg.sql
usrdecs.h
zlib.txt


Step 5: The GoldenGate version upgrade is now complete. Connect with ggsci and check the version.

[oracle:sukumar:/app/oracle/product/ggate]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.8 17044551 OGGCORE_11.2.1.0.0OGGBP_PLATFORMS_130718.0526_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Jul 18 2013 10:34:27

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.



GGSCI (sukumar) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     EXT1       00:00:00      00:01:41
EXTRACT     STOPPED     EXT2       00:00:00      00:01:41
EXTRACT     STOPPED     PUMP1      00:00:00      00:01:40
EXTRACT     STOPPED     PUMP2      00:00:00      00:01:39


Step 6: Start the Manager process, followed by all the Extract processes.

GGSCI (sukumar) 2> start manager

Manager started.


GGSCI (sukumar) 3> start *

Sending START request to MANAGER ...
EXTRACT EXT1 starting

Sending START request to MANAGER ...
EXTRACT EXT2 starting

Sending START request to MANAGER ...
EXTRACT PUMP1 starting

Sending START request to MANAGER ...
EXTRACT PUMP2 starting


GGSCI (sukumar) 4> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:00      00:01:56
EXTRACT     RUNNING     EXT2        00:00:00      00:00:00
EXTRACT     RUNNING     PUMP1       00:00:00      00:01:55
EXTRACT     RUNNING     PUMP2       00:03:19      00:00:00




GoldenGate Active-Active replication using CONFLICTRESOLUTION




                   Source                                      Target
Database Version   11.2.0.4                                    11.2.0.4
OS Version         OEL 6 – 64 Bit                              OEL 6 – 64 Bit
OGG HOME           /u01/GG/training/source                     /u01/GG/training
GoldenGate user    ggs_admin                                   ggs_admin
OGG Core           11.2.1.0.0OGGBP_PLATFORMS_140304.2209_FBO   (both nodes)
OGG Version        11.2.1.0.20                                 (both nodes)

Database Prerequisites (On Both Source and Target)


Enable GoldenGate replication at the database level (applicable for database versions 11.2.0.4 and above)

SQL> show parameter enable_goldengate_replication

NAME                                 TYPE        VALUE  
------------------------------------ ----------- --------
enable_goldengate_replication        boolean     TRUE


Keep your database in Archivelog Mode

SQL> select LOG_MODE from v$database;

LOG_MODE
------------
ARCHIVELOG


Enable supplemental logging for primary key, unique index, foreign key, and all columns.
Note: supplemental_log_data_min can be IMPLICIT or YES
SQL> SELECT supplemental_log_data_min,
        supplemental_log_data_pk,
        supplemental_log_data_ui,
        supplemental_log_data_fk,
        supplemental_log_data_all
FROM v$database; 

SUPPLEME SUP SUP SUP SUP
-------- --- --- --- ---
NO       NO  NO  NO  NO

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (PRIMARY KEY) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (UNIQUE) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (FOREIGN KEY) COLUMNS;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;

SQL> SELECT supplemental_log_data_min,
      supplemental_log_data_pk,
      supplemental_log_data_ui,    
      supplemental_log_data_fk,    
      supplemental_log_data_all
FROM v$database;

SUPPLEME SUP SUP SUP SUP
-------- --- --- --- ---
YES      YES YES YES YES

Source:
GGSCI (ogg1.sukumar.com) 1> edit param ./GLOBALS

GGSCHEMA GGS_ADMIN
CHECKPOINTTABLE GGS_ADMIN.CHKPTAB

Target:
GGSCI (ogg2.sukumar.com) 1> edit param ./GLOBALS

GGSCHEMA GGS_ADMIN
CHECKPOINTTABLE GGS_ADMIN.CHKPTAB

Create checkpoint table on both Source and Target
Source:
GGSCI (ogg1.sukumar.com) 1> add checkpointtable ggs_admin.CHKPTAB
Successfully created checkpoint table ggs_admin.chkptab.

Target:
GGSCI (ogg2.sukumar.com) 1> add checkpointtable ggs_admin.CHKPTAB
Successfully created checkpoint table ggs_admin.chkptab.

Create Parameter Files on Source:
Extract

GGSCI (ogg01.sukumar.com) 5> edit param ext1

EXTRACT ext1
SETENV (NLS_LANG=AMERICAN_AMERICA.UTF8)
SETENV ORACLE_SID=SUKUMAR1
USERID ggs_admin, PASSWORD ggs_admin
EXTTRAIL /u01/GG/training/source/dirdat/e1
TRANLOGOPTIONS DBLOGREADER
TRANLOGOPTIONS EXCLUDEUSER GGS_ADMIN

TABLE ABC.RESERVATION,
GETBEFORECOLS(
ON UPDATE KEYINCLUDING(TIME),
ON DELETE KEYINCLUDING(TIME));

Pump
GGSCI (ogg01.sukumar.com) 6> edit param dpump

EXTRACT dpump
USERID ggs_admin, PASSWORD ggs_admin
RMTHOST ogg02.sukumar.com, MGRPORT 4444
RMTTRAIL /u01/GG/training/dirdat/p1

TABLE ABC.*;











Replicat
GGSCI (ogg01.sukumar.com) 8> edit param rep2

REPLICAT rep2
SETENV (NLS_LANG=AMERICAN_AMERICA.UTF8)
SETENV ORACLE_SID=SUKUMAR1
USERID ggs_admin PASSWORD ggs_admin
DISCARDFILE /u01/GG/training/source/dirrpt/reptr.dsc, APPEND, MEGABYTES 512
ALLOWNOOPUPDATES
ASSUMETARGETDEFS

MAP ABC.RESERVATION, TARGET ABC.RESERVATION,
GETBEFORECOLS
        (
         ON UPDATE KEYINCLUDING (TIME),
         ON DELETE KEYINCLUDING (TIME)
        ), &
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMIN (TIME))), &
RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMIN (TIME))), &
RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)), &
RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)), &
RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));
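The RESOLVECONFLICT rules above can be pictured with a small simulation. This is a hypothetical sketch of the USEMIN semantics only, not GoldenGate code: on UPDATEROWEXISTS, the row whose resolution column (TIME) is smaller wins.

```python
# Hypothetical simulation of RESOLVECONFLICT (UPDATEROWEXISTS, USEMIN(TIME)).
# This is not GoldenGate code, only the decision rule, for illustration.

def resolve_update_row_exists(target_row, incoming_row, col="TIME"):
    """USEMIN: the incoming update wins only if its resolution column
    is smaller (earlier) than the value already in the target row."""
    if incoming_row[col] < target_row[col]:
        return incoming_row
    return target_row

# A row already committed with an earlier TIME vs. a later conflicting update:
target = {"ID": 1, "NAME": "UPDATED ON OGG2", "TIME": "09.42.21"}
incoming = {"ID": 1, "NAME": "UPDATED ON OGG1", "TIME": "09.45.03"}

print(resolve_update_row_exists(target, incoming)["NAME"])  # UPDATED ON OGG2
```

The update that committed first (earlier TIME) is kept, which matches the behavior demonstrated later in this article.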

Add the Extract, Pump, and Replicat processes to GoldenGate
GGSCI (ogg01.sukumar.com) 9> add extract ext1, tranlog, begin now

GGSCI (ogg01.sukumar.com) 10> add exttrail /u01/GG/training/source/dirdat/e1, extract ext1

GGSCI (ogg01.sukumar.com) 11> add extract dpump, exttrailsource /u01/GG/training/source/dirdat/e1

GGSCI (ogg01.sukumar.com) 12> add rmttrail /u01/GG/training/dirdat/p1, extract dpump

GGSCI (ogg01.sukumar.com) 13> add replicat rep2, exttrail /u01/GG/training/source/dirdat/p2, CHECKPOINTTABLE GGS_ADMIN.CHKPTAB


 Check the Process status

GGSCI (ogg01.sukumar.com) 14>  info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     DPUMP       00:00:00      00:00:20
EXTRACT     STOPPED     EXT1        00:00:00      00:00:15
REPLICAT    STOPPED     REP2        00:00:00      00:00:11











Start all the processes and check the status (all should be in RUNNING state)
GGSCI (ogg01.sukumar.com) 14> start *

GGSCI (ogg01.sukumar.com) 15> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING      DPUMP       00:00:00      00:00:03
EXTRACT     RUNNING      EXT1        00:00:00      00:00:00
REPLICAT    RUNNING      REP2        00:00:00      00:00:02


Add TRANDATA to the schema
GGSCI (ogg01.sukumar.com) 5> dblogin userid ggs_admin, password ggs_admin
Successfully logged into database.

GGSCI (ogg01.sukumar.com) 6> add schematrandata abc

2014-08-27 08:15:31  INFO    OGG-01788  SCHEMATRANDATA has been added on schema abc.










Create Parameter Files on Target:

Extract
GGSCI (ogg02.sukumar.com) 6> edit param ext2

EXTRACT ext2
SETENV (NLS_LANG=AMERICAN_AMERICA.UTF8)
SETENV ORACLE_SID=SUKUMAR2
USERID ggs_admin, PASSWORD ggs_admin
EXTTRAIL /u01/GG/training/source/dirdat/e2
TRANLOGOPTIONS DBLOGREADER

TABLE ABC.RESERVATION,
GETBEFORECOLS(
ON UPDATE KEYINCLUDING(TIME),
ON DELETE KEYINCLUDING(TIME));


 Pump
GGSCI (ogg02.sukumar.com) 7> edit param dpump

EXTRACT dpump
USERID ggs_admin, PASSWORD ggs_admin
RMTHOST ogg01.sukumar.com, MGRPORT 8877
RMTTRAIL /u01/GG/training/source/dirdat/p2

TABLE ABC.*;








 Replicat
GGSCI (ogg02.sukumar.com) 5> edit param rep1

REPLICAT rep1
SETENV (NLS_LANG=AMERICAN_AMERICA.UTF8)
SETENV ORACLE_SID=SUKUMAR2
USERID ggs_admin PASSWORD ggs_admin
DISCARDFILE /u01/GG/training/dirrpt/reptr.dsc, APPEND, MEGABYTES 512
ALLOWNOOPUPDATES
ASSUMETARGETDEFS

MAP ABC.RESERVATION, TARGET ABC.RESERVATION,
GETBEFORECOLS
        (
         ON UPDATE KEYINCLUDING (TIME),
         ON DELETE KEYINCLUDING (TIME)
        ), &
RESOLVECONFLICT (UPDATEROWEXISTS, (DEFAULT, USEMIN (TIME))), &
RESOLVECONFLICT (INSERTROWEXISTS, (DEFAULT, USEMIN (TIME))), &
RESOLVECONFLICT (DELETEROWEXISTS, (DEFAULT, OVERWRITE)), &
RESOLVECONFLICT (UPDATEROWMISSING, (DEFAULT, OVERWRITE)), &
RESOLVECONFLICT (DELETEROWMISSING, (DEFAULT, DISCARD));


Add the Extract, Pump, and Replicat processes to GoldenGate
GGSCI (ogg02.sukumar.com) 6> add extract ext2, tranlog, begin now

GGSCI (ogg02.sukumar.com) 7> add exttrail /u01/GG/training/source/dirdat/e2, extract ext2

GGSCI (ogg02.sukumar.com) 8> add extract dpump,  exttrailsource /u01/GG/training/source/dirdat/e2

GGSCI (ogg02.sukumar.com) 9> add rmttrail /u01/GG/training/source/dirdat/p2, extract dpump

GGSCI (ogg02.sukumar.com) 10> add replicat rep1, exttrail /u01/GG/training/dirdat/p1, CHECKPOINTTABLE GGS_ADMIN.CHKPTAB


Check the Process status
GGSCI (ogg02.sukumar.com) 11> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     STOPPED     DPUMP       00:00:00      00:00:11
EXTRACT     STOPPED     EXT2        00:00:00      00:00:09
REPLICAT    STOPPED     REP1        00:00:00      00:00:06










Start all the processes and check the status (all should be in RUNNING state)
GGSCI (ogg02.sukumar.com) 11> start *

GGSCI (ogg02.sukumar.com) 11> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING      DPUMP       00:00:00      00:00:04
EXTRACT     RUNNING      EXT2        00:00:00      00:00:02
REPLICAT    RUNNING      REP1        00:00:00      00:00:00


Add TRANDATA to the schema
GGSCI (ogg02.sukumar.com) 5> dblogin userid ggs_admin, password ggs_admin
Successfully logged into database.

GGSCI (ogg02.sukumar.com) 6> add schematrandata abc

2014-08-27 08:15:31  INFO    OGG-01788  SCHEMATRANDATA has been added on schema abc.









Source Table
SQL> desc ABC.RESERVATION

 Name              Null?    Type
 ----------------- -------- ------------
 ID                NOT NULL NUMBER
 NAME                       VARCHAR2(20)
 TIME                       TIMESTAMP(6)









Target Table
SQL> desc ABC.RESERVATION

 Name              Null?    Type
 ----------------- -------- ------------
 ID                NOT NULL NUMBER
 NAME                       VARCHAR2(20)
 TIME                       TIMESTAMP(6)








Let's test with inserts from both nodes.

SQL> insert into reservation values (1,'ogg1',sysdate);

1 row created.

SQL> commit;

Commit complete.


SQL> insert into reservation values (2,'ogg2', sysdate);

1 row created.

SQL> commit;

Commit complete.


SQL> select * from reservation;

        ID NAME                 TIME
---------- -------------------- ------------------------------
         2 ogg2                 27-AUG-14 09.38.09.000000 AM
         1 ogg1                 27-AUG-14 09.34.02.000000 AM


SQL> select * from reservation;

        ID NAME                 TIME
---------- -------------------- ------------------------------
         1 ogg1                 27-AUG-14 09.34.02.000000 AM
         2 ogg2                 27-AUG-14 09.38.09.000000 AM






Now test with UPDATE statements.

Because the conflict resolution uses USEMIN on the TIME column, whichever record carries the minimum (earliest) time is the one kept in the table.

In short: first COMMIT wins.

SQL> update reservation set NAME='UPDATED ON OGG1',TIME=sysdate where id=1;

1 row updated.

SQL> select * from reservation where id=1;

        ID NAME                 TIME
---------- -------------------- ------------------------------
         1 UPDATED ON OGG1      27-AUG-14 09.38.14.000000 AM



SQL> update reservation set NAME='UPDATED ON OGG2', TIME=sysdate where id=1;

1 row updated.

SQL> select * from reservation where id=1;

        ID NAME                 TIME
---------- -------------------- ------------------------------
         1 UPDATED ON OGG2      27-AUG-14 09.42.21.000000 AM


The transaction has not yet been committed on either database, which is why the record shows two different times.
Next, the record is committed on ogg2 first, then on ogg1.


SQL> update reservation set NAME='UPDATED ON OGG2', TIME=sysdate where id=1;

1 row updated.

SQL> Commit;

Commit complete.

SQL> update reservation set NAME='UPDATED ON OGG1',TIME=sysdate where id=1;

1 row updated.

SQL> Commit;

Commit complete.


In general the latest update would take effect, but because USEMIN is applied to the TIME column, the first commit always wins while the transaction at the other end is still open (not yet committed).

SQL> select * from reservation;

        ID NAME                 TIME
---------- -------------------- ------------------------------
         2 ogg2                 27-AUG-14 09.38.09.000000 AM
         1 UPDATED ON OGG2      27-AUG-14 09.42.21.000000 AM

SQL> select * from reservation;

        ID NAME                 TIME
---------- -------------------- ------------------------------
         1 UPDATED ON OGG2      27-AUG-14 09.42.21.000000 AM
         2 ogg2                 27-AUG-14 09.38.09.000000 AM

Now let's work with DELETE statements.
Delete the record from ogg1 and keep the transaction open, then delete the same record from ogg2.
Committing the transaction on ogg1 deletes the record on ogg2 as well. Committing on ogg2 afterwards would normally raise a no-data-found error, but because DELETEROWMISSING is resolved with DISCARD, the transaction commits successfully even though the row is no longer in the database.

SQL> delete from reservation where id=1;

1 row deleted.

SQL> delete from reservation where id=1;

1 row deleted.

SQL> commit;

Commit complete.

SQL> commit;

Commit complete.


TIP to change the SQL> prompt.



1) Go to $ORACLE_HOME/sqlplus/admin.
2) Add this command to the glogin.sql file:   set sqlprompt '_USER>';
3) Connect to the database.



############################     DEMO   ############################

SQL> exit;


[oracle@demo]$ cd $ORACLE_HOME/sqlplus/admin
[oracle@demo]$ vi glogin.sql

set sqlprompt '_USER>';


[oracle@demo]$ sqlplus dbauser/password

SQL*Plus: Release 11.2.0.3.0 Production on Fri Aug 23 16:28:08 2013
Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options


DBAUSER>



###################################################################
SQL PROMPT OPTIONS

_connect_identifier    displays the connection identifier.
_date                  displays the current date.
_editor                displays the editor used by the EDIT command.
_o_version             displays the Oracle version.
_o_release             displays the Oracle release.
_privilege             displays the privilege, such as SYSDBA, SYSOPER, or SYSASM.
_sqlplus_release       displays the SQL*Plus release.
_user                  displays the current user name.
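These variables can also be combined. For example, a glogin.sql entry (illustrative; adjust to taste) that shows both the user and the connect identifier:

```sql
set sqlprompt '_user@_connect_identifier> '
```

A session as dbauser on a database whose connect identifier is ORCL would then show the prompt DBAUSER@ORCL>.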




MySQL Standby Creation (Master - Slave Replication)



Contents

MySQL Standby Creation (Master - Slave Replication)
    Environment Details
    Configuration Steps on Master Node
        Step 1: Mandatory parameters (/etc/my.cnf)
        Step 2: User Creation
        Step 3: Master Status
    Configuration Steps on Slave Node
        Step 4: Mandatory Parameters (/etc/my.cnf)
        Step 5: Connection Testing (Slave to Master)
        Step 6: Configuration of Slave process
        Step 7: Start Slave
        Step 8: Slave Status
        Step 9: Test the Replication
    Switchover (Slave to Master)
        Master Node
        Slave Node



MySQL standby Creation (Master - Slave Replication)

Environment Details:


Master Node             10.0.0.81
Master Node Hostname    mysqlnode1.sukumar.com
Slave Node              10.0.0.82
Slave Node Hostname     mysqlnode2.sukumar.com

Configuration Steps on Master Node

Step 1: Mandatory parameters (/etc/my.cnf)

The parameters listed below are mandatory for the master node. Make sure to set a unique server-id.

[mysql@mysqlnode1 ~]$ cat /etc/my.cnf
[mysqld]
log-bin=/var/lib/mysql/mysql-bin
max_binlog_size=4096
binlog_format=row
socket=/var/lib/mysql/mysql.sock
server-id=1
binlog_do_db=demo
binlog-ignore-db=mysql
binlog-ignore-db=test

[client]
socket=/var/lib/mysql/mysql.sock

[mysqld_safe]
err-log=/var/log/mysqld-node1.log

Step 2: User Creation


Connect to MySQL and create a dedicated user for replication. Grant the privileges required to connect from the slave node.

mysql> grant replication slave on *.* to
    -> 'rep_user'@'10.0.0.82' identified by 'rep_user';
Query OK, 0 rows affected (0.00 sec)

mysql> select user, host from mysql.user
    -> where user='rep_user';
+----------+-----------+
| user     | host      |
+----------+-----------+
| rep_user | 10.0.0.82 |
+----------+-----------+
1 row in set (0.00 sec)

Step 3: Master Status


 Check the master status

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000005 |      120 | demo         | mysql,test       |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

Configuration Steps on Slave Node

Step 4: Mandatory Parameters (/etc/my.cnf)


The parameters listed below are mandatory for the slave node. Make sure to set a unique server-id.

[mysql@mysqlnode2 ~]$ cat /etc/my.cnf
[mysqld]
log-bin=/var/lib/mysql/mysql2-bin
max_binlog_size=4096
binlog_format=row
socket=/var/lib/mysql/mysql.sock
server-id=2

[client]
socket=/var/lib/mysql/mysql.sock

Step 5: Connection Testing (Slave to Master)


Test the connection from the slave node to the master node using the command below.

[mysql@mysqlnode2 ~]$ mysql -u rep_user -h mysqlnode1.sukumar.com -prep_user demo

Step 6:  Configuration of Slave process


This configures the slave, and the server remembers the settings, so in recent MySQL versions this replaces the equivalent my.cnf settings.
Note: set the values appropriately for the master node. MASTER_LOG_FILE and MASTER_LOG_POS must come from the master status output on the master node.

[mysql@mysqlnode2 ~]$ mysql -u root -pwelcome123 demo

mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.81',
       MASTER_USER='rep_user',
       MASTER_PASSWORD='rep_user',
       MASTER_PORT=3306,
       MASTER_LOG_FILE='mysql-bin.000005',
       MASTER_LOG_POS=120,
       MASTER_CONNECT_RETRY=10;
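The MASTER_LOG_FILE and MASTER_LOG_POS values come from the master status output in Step 3. As a sketch of how they could be extracted programmatically (the tabular output format is an assumption and may differ across MySQL versions):

```python
# Hedged sketch: extract the binlog file name and position from the
# tabular output of `mysql -e "show master status"`. The column layout
# shown below is assumed, not guaranteed across MySQL versions.
def parse_master_status(raw):
    rows = [line for line in raw.strip().splitlines()
            if not line.startswith("+")]
    header = [h.strip() for h in rows[0].split("|")[1:-1]]
    values = [v.strip() for v in rows[1].split("|")[1:-1]]
    status = dict(zip(header, values))
    return status["File"], int(status["Position"])

sample = """\
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000005 |      120 | demo         | mysql,test       |
+------------------+----------+--------------+------------------+
"""
log_file, log_pos = parse_master_status(sample)
print(log_file, log_pos)  # mysql-bin.000005 120
```

The two returned values plug straight into the CHANGE MASTER statement above.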

Step 7:  Start Slave


Start the slave process with the below command.

mysql> start slave;
ERROR 1872 (HY000): Slave failed to initialize relay log info structure from the repository

If you receive the error "ERROR 1872 (HY000): Slave failed to initialize relay log info structure from the repository", reset the slave and repeat Step 6.

mysql> reset slave;
Query OK, 0 rows affected (0.00 sec)

mysql> CHANGE MASTER TO MASTER_HOST='10.0.0.81', MASTER_USER='rep_user', MASTER_PASSWORD='rep_user', MASTER_PORT=3306, MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=120, MASTER_CONNECT_RETRY=10;
Query OK, 0 rows affected, 2 warnings (0.05 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

Step 8: Slave Status


Make sure the two parameters below show "Yes", and that the remaining values are appropriate.

             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes


mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 10.0.0.81
                  Master_User: rep_user
                  Master_Port: 3306
                Connect_Retry: 10
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 120
               Relay_Log_File: mysqlnode2-relay-bin.000002
                Relay_Log_Pos: 283
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 120
              Relay_Log_Space: 461
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: 8a701d66-2119-11e3-9ab2-0689f6cf2c77
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
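For monitoring, the two thread flags highlighted above are the ones that matter. A hedged sketch of a health check against the \G output format (not an official MySQL tool):

```python
# Hedged sketch: check replication health from `show slave status\G`
# output. Only the two thread flags highlighted above are examined.
def slave_healthy(raw):
    status = {}
    for line in raw.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            status[key.strip()] = value.strip()
    return (status.get("Slave_IO_Running") == "Yes"
            and status.get("Slave_SQL_Running") == "Yes")

sample = """\
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
"""
print(slave_healthy(sample))  # True
```

In practice you would also watch Seconds_Behind_Master and the Last_*_Error fields.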

Step 9: Test the Replication


Perform the transactions on master node and slave node will automatically be in sync.

Switchover (Slave to Master)


Master Node


FLUSH LOGS closes and reopens all log files. If binary logging is enabled, the sequence number of the binary log file is incremented by one relative to the previous file.

FLUSH LOGS;

Slave Node


Stop the slave process and reset the master. This configures the node as a master, and it then acts according to its my.cnf settings.

STOP SLAVE;
RESET MASTER;

Degree of Parallelism - 11gR2 feature


Pre-Requisites

SQL> sho parameter parallel_degree
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_limit                string      CPU
parallel_degree_policy               string      MANUAL


If parallel_servers_target is less than parallel_max_servers, parallel statement queuing can occur; otherwise it cannot, because the parallel_servers_target limit is reached before the Auto DOP queuing logic kicks in.
SQL> sho parameter parallel_servers_target
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_servers_target              integer     16


SQL> sho parameter parallel_max_servers
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_max_servers                 integer     40

Test Table Creation

Created a table test_dop with dba_objects data.
SQL> sho parameter parallel_degree_policy
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy               string      MANUAL

SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------
Plan hash value: 381934326

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |     1 |   207 |   298   (0)| 00:00:04 |
|   1 |  TABLE ACCESS FULL| TEST_DOP |     1 |   207 |   298   (0)| 00:00:04 |
------------------------------------------------------------------------------
Note
-----
   - dynamic sampling used for this statement (level=2)

Explain plan with Parallel hint

Check the Explain Plan by passing the Parallel Hint.
SQL> explain plan for select /*+ parallel (test_dop,8) */ * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 365997929

---------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |     1 |   207 |    41   (0)| 00:00:01 |
|   1 |  PX COORDINATOR      |          |       |       |            |          |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |     1 |   207 |    41   (0)| 00:00:01 |
|   3 |    PX BLOCK ITERATOR |          |     1 |   207 |    41   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| TEST_DOP |     1 |   207 |    41   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)


Explain plan with DOP


Check the explain plan after enabling Auto DOP. Note that automatic DOP is skipped.
SQL> alter system set parallel_degree_policy=auto;

System altered.

SQL> sho parameter parallel_degree_policy

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
parallel_degree_policy               string      AUTO


SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql


PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------
Plan hash value: 381934326

------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |     1 |   207 |   298   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TEST_DOP |     1 |   207 |   298   (0)| 00:00:01 |
------------------------------------------------------------------------------

Note
-----
   - dynamic sampling used for this statement (level=2)
   - automatic DOP: skipped because of IO calibrate statistics are missing


Stats Collection

Collect the table stats and try again. (The estimated number of rows has increased.)
SQL>  EXEC DBMS_STATS.GATHER_TABLE_STATS ('sukku', 'test_dop');

PL/SQL procedure successfully completed.

 
SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
Plan hash value: 381934326
------------------------------------------------------------------------------
| Id  | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |          |    51 |  4590 |   298   (0)| 00:00:01 |
|   1 |  TABLE ACCESS FULL| TEST_DOP |    51 |  4590 |   298   (0)| 00:00:01 |
------------------------------------------------------------------------------

Note
-----
   - automatic DOP: skipped because of IO calibrate statistics are missing

Enabling automatic DOP alone is not enough; the optimizer still skips it because the I/O calibration statistics are missing.
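Before running a calibration, you can also confirm that no calibration results exist yet. A sketch: DBA_RSRC_IO_CALIBRATE is populated only after a successful CALIBRATE_IO run, so an empty result here is consistent with the NOT AVAILABLE status seen below.

```sql
-- No rows here means CALIBRATE_IO has never completed on this database
SELECT max_iops, max_mbps, latency, start_time
FROM   dba_rsrc_io_calibrate;
```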
  

IO Calibration

Make sure the STATUS column of V$IO_CALIBRATION_STATUS shows READY.
SQL> select status from V$IO_CALIBRATION_STATUS;

STATUS
-------------
NOT AVAILABLE

Verify the parameter settings below; asynchronous I/O must be enabled before the calibration can bring IO_CALIBRATION_STATUS to READY.
disk_asynch_io = true
filesystemio_options = asynch


SQL> sho parameter disk_asynch_io

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
disk_asynch_io                       boolean     TRUE

SQL> sho parameter filesystemio_options

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      none


SQL> alter system set filesystemio_options=asynch scope=spfile;

System altered.


Database Restart

Bounce the database, as this parameter change requires a restart to take effect.
SQL> shut immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup
ORACLE instance started.
Database mounted.
Database opened.


SQL> sho parameter filesystemio_options

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
filesystemio_options                 string      ASYNCH


Run the PL/SQL block below to calibrate I/O and move IO_CALIBRATION_STATUS to READY.
SET SERVEROUTPUT ON
DECLARE
  lat  INTEGER;
  iops INTEGER;
  mbps INTEGER;
BEGIN
-- DBMS_RESOURCE_MANAGER.CALIBRATE_IO (<DISKS>, <MAX_LATENCY>, iops, mbps, lat);
   DBMS_RESOURCE_MANAGER.CALIBRATE_IO (2, 10, iops, mbps, lat);

  DBMS_OUTPUT.PUT_LINE ('max_iops = ' || iops);
  DBMS_OUTPUT.PUT_LINE ('latency  = ' || lat);
  DBMS_OUTPUT.PUT_LINE ('max_mbps = ' || mbps);
end;
/

The output should look similar to this:
max_iops = 89
latency  = 10
max_mbps = 38

PL/SQL procedure successfully completed.


SQL> select status from v$IO_CALIBRATION_STATUS;

STATUS
-------------
READY


DOP enabled

Without passing a parallel hint, query parallelism is now achieved automatically by enabling DOP.

SQL> explain plan for select * from test_dop;
SQL> @$ORACLE_HOME/rdbms/admin/utlxpls.sql

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------
Plan hash value: 365997929
---------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    51 |  4590 |   166   (0)| 00:00:01 |
|   1 |  PX COORDINATOR      |          |       |       |            |          |
|   2 |   PX SEND QC (RANDOM)| :TQ10000 |    51 |  4590 |   166   (0)| 00:00:01 |
|   3 |    PX BLOCK ITERATOR |          |    51 |  4590 |   166   (0)| 00:00:01 |
|   4 |     TABLE ACCESS FULL| TEST_DOP |    51 |  4590 |   166   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Note
-----
   - automatic DOP: Computed Degree of Parallelism is 2
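Beyond the explain plan, it is worth confirming at run time that parallel execution servers are actually allocated. A sketch of one way to do that: run the query in one session, then check the PX views from another (these are standard dynamic performance views, but the exact output depends on your workload):

```sql
-- Active parallel execution sessions: QC session plus its slaves,
-- with the requested and actual degree
SELECT qcsid, sid, degree, req_degree
FROM   v$px_session;

-- Cumulative PX statistics for the current session
SELECT *
FROM   v$pq_sesstat
WHERE  statistic = 'Queries Parallelized';
```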
