Channel: Oracle DBA – Tips and Techniques

12c New Feature – RMAN RECOVER TABLE


One of the good new features in Oracle 12c is the ability to restore a single table, or a single partition of a partitioned table, from an RMAN backup via the RECOVER TABLE command. Prior to 12c, restoring a table was a long, drawn-out and difficult affair.

So when might we use this feature?

There is TSPITR (tablespace point-in-time recovery), but what if we only want to restore a single table or a subset of tables, and the tablespace contains a large number of tables?

Tables may have been logically corrupted or records wrongly purged, and we cannot use the FLASHBACK TABLE feature because we do not have enough undo data available to go back to the required point in time.

FLASHBACK DATABASE has not been turned on, and even if it had been, using flashback database just to recover a few tables takes the entire database back to a point in time in the past, which is not very desirable.

So what does the RMAN RECOVER TABLE command do behind the scenes?

 

It creates an auxiliary database or instance which is used to recover the tables to a specific point in time. This database will contain a few system-related data files (SYSTEM, SYSAUX, UNDO) as well as the data files belonging to the tablespace containing the tables we are looking to restore.

In this example it restores the SYSTEM, SYSAUX and UNDO tablespaces of the CDB, and the SYSTEM, SYSAUX and EXAMPLE tablespaces of the PDB.

Then it creates a Data Pump export dump file which will contain the recovered table or partitions of tables.

It will then import the data into the target database using Data Pump Import.

Finally it will remove the temporary auxiliary instance.

 

Let us see an example where we drop the SH.CUSTOMERS table and test the restore and recovery using the new 12c RMAN RECOVER TABLE feature

In this example we have a container database (CDB) called cdb12c and there is a pluggable database (PDB) called testdb1. The SH.CUSTOMERS table is contained in the EXAMPLE tablespace and this is part of the TESTDB1 PDB.

Note that in 12c, when we connect to the root container CDB and issue the BACKUP DATABASE command, all the PDBs also get backed up automatically. But if we want we can do the recovery at the PDB level if required as we will see in this case.
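
For reference, a full backup taken while connected to the root container covers the CDB and all its PDBs. A minimal example of the kind of backup this recovery relies on (the channel, format and tag settings will of course differ per environment) would be:

RMAN> CONNECT TARGET /
RMAN> BACKUP DATABASE PLUS ARCHIVELOG;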

 

This is the RMAN command we will be using:

 

RECOVER TABLE sh.customers OF PLUGGABLE DATABASE testdb1
UNTIL TIME 'sysdate-10/1440'
AUXILIARY DESTINATION '/u01/app/oracle/backup/cdb12c'
DATAPUMP DESTINATION '/u01/app/oracle/backup/cdb12c';

 

We are recovering the table to a point in time 10 minutes ago (just before we dropped the table), and specifying a location on disk where the auxiliary database files, as well as the Data Pump export files, will be created.
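
As an aside, if we did not want the recovered table to overwrite the existing SH.CUSTOMERS table, the RECOVER TABLE command can also remap the recovered table to a different name. Something along these lines would do it (the CUSTOMERS_RESTORED name is purely illustrative):

RECOVER TABLE sh.customers OF PLUGGABLE DATABASE testdb1
UNTIL TIME 'sysdate-10/1440'
AUXILIARY DESTINATION '/u01/app/oracle/backup/cdb12c'
DATAPUMP DESTINATION '/u01/app/oracle/backup/cdb12c'
REMAP TABLE sh.customers:customers_restored;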

We can see the output of the RMAN RECOVER TABLE command below and I have highlighted some of the key steps being performed.

Note that these are the datafiles of the CDB being restored:

datafiles 1, 4 and 3

These are the datafiles of the PDB being restored:

datafiles 8, 9 and 11

SQL> select count(*) from customers;

  COUNT(*)
----------
     55500

SQL> drop table customers cascade constraints;

Table dropped.

 

RMAN> RECOVER TABLE sh.customers OF PLUGGABLE DATABASE testdb1
UNTIL TIME 'sysdate-10/1440'
AUXILIARY DESTINATION '/u01/app/oracle/backup/cdb12c'
DATAPUMP DESTINATION '/u01/app/oracle/backup/cdb12c';

Starting recover at 25-JUL-13

using channel ORA_DISK_1

RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments

Tablespace SYSTEM

Tablespace UNDOTBS1

Creating automatic instance, with SID=’vpwz’

initialization parameters used for automatic instance:
db_name=CDB12C
db_unique_name=vpwz_pitr_testdb1_CDB12C
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
db_create_file_dest=/u01/app/oracle/backup/cdb12c
log_archive_dest_1=’location=/u01/app/oracle/backup/cdb12c’
enable_pluggable_database=true
_clone_one_pdb_recovery=true
#No auxiliary parameter file used

starting up automatic instance CDB12C

Oracle instance started

Total System Global Area 1068937216 bytes

Fixed Size 2296576 bytes
Variable Size 281019648 bytes
Database Buffers 780140544 bytes
Redo Buffers 5480448 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until time “sysdate-10/1440″;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone ‘alter database mount clone database’;
# archive current online log
sql ‘alter system archive log current’;
}
executing Memory Script

executing command: SET until clause

Starting restore at 25-JUL-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=82 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CDB12C/autobackup/2013_07_24/o1_mf_s_821611376_8yyc3jb2_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CDB12C/autobackup/2013_07_24/o1_mf_s_821611376_8yyc3jb2_.bkp tag=TAG20130724T092256
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/backup/cdb12c/CDB12C/controlfile/o1_mf_8z11h238_.ctl
Finished restore at 25-JUL-13

sql statement: alter database mount clone database

sql statement: alter system archive log current

contents of Memory Script:
{
# set requested point in time
set until time “sysdate-10/1440″;
# set destinations for recovery set and auxiliary set datafiles
set newname for clone datafile 1 to new;
set newname for clone datafile 4 to new;
set newname for clone datafile 3 to new;
set newname for clone datafile 8 to new;
set newname for clone datafile 9 to new;
set newname for clone tempfile 1 to new;
set newname for clone tempfile 3 to new;
# switch all tempfiles
switch clone tempfile all;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 1, 4, 3, 8, 9;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

renamed tempfile 1 to /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_temp_%u_.tmp in control file
renamed tempfile 3 to /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_temp_%u_.tmp in control file

Starting restore at 25-JUL-13
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_undotbs1_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/cdb12c/bkp_06ofhicn_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/cdb12c/bkp_06ofhicn_1_1 tag=TAG20130724T091503
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:35
channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00008 to /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_system_%u_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00009 to /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_sysaux_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/cdb12c/bkp_0cofhir0_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/cdb12c/bkp_0cofhir0_1_1 tag=TAG20130724T092240
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 25-JUL-13

datafile 1 switched to datafile copy
input datafile copy RECID=12 STAMP=821699866 file name=/u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_system_8z11h87b_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=13 STAMP=821699866 file name=/u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_undotbs1_8z11h87f_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=14 STAMP=821699866 file name=/u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_sysaux_8z11h875_.dbf
datafile 8 switched to datafile copy
input datafile copy RECID=15 STAMP=821699866 file name=/u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_system_8z11jcbx_.dbf
datafile 9 switched to datafile copy
input datafile copy RECID=16 STAMP=821699866 file name=/u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_sysaux_8z11jcbr_.dbf

contents of Memory Script:
{
# set requested point in time
set until time “sysdate-10/1440″;
# online the datafiles restored or switched
sql clone “alter database datafile 1 online”;
sql clone “alter database datafile 4 online”;
sql clone “alter database datafile 3 online”;
sql clone ‘TESTDB1′ “alter database datafile
8 online”;
sql clone ‘TESTDB1′ “alter database datafile
9 online”;
# recover and open database read only
recover clone database tablespace “SYSTEM”, “UNDOTBS1″, “SYSAUX”, “TESTDB1″:”SYSTEM”, “TESTDB1″:”SYSAUX”;
sql clone ‘alter database open read only’;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 1 online

sql statement: alter database datafile 4 online

sql statement: alter database datafile 3 online

sql statement: alter database datafile 8 online

sql statement: alter database datafile 9 online

Starting recover at 25-JUL-13
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 150 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_150_8yybs4c0_.arc
archived log for thread 1 with sequence 151 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_151_8yyf1pr7_.arc
archived log for thread 1 with sequence 152 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_152_8yyf26kh_.arc
archived log for thread 1 with sequence 153 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_25/o1_mf_1_153_8z119t3l_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_150_8yybs4c0_.arc thread=1 sequence=150
archived log file name=/u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_151_8yyf1pr7_.arc thread=1 sequence=151
archived log file name=/u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_152_8yyf26kh_.arc thread=1 sequence=152
archived log file name=/u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_25/o1_mf_1_153_8z119t3l_.arc thread=1 sequence=153
media recovery complete, elapsed time: 00:00:03
Finished recover at 25-JUL-13

sql statement: alter database open read only

contents of Memory Script:
{
sql clone ‘alter pluggable database TESTDB1 open read only’;
}
executing Memory Script

sql statement: alter pluggable database TESTDB1 open read only

contents of Memory Script:
{
sql clone “create spfile from memory”;
shutdown clone immediate;
startup clone nomount;
sql clone “alter system set control_files =
”/u01/app/oracle/backup/cdb12c/CDB12C/controlfile/o1_mf_8z11h238_.ctl” comment=
”RMAN set” scope=spfile”;
shutdown clone immediate;
startup clone nomount;
# mount database
sql clone ‘alter database mount clone database’;
}
executing Memory Script

sql statement: create spfile from memory

database closed
database dismounted
Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 1068937216 bytes

Fixed Size 2296576 bytes
Variable Size 285213952 bytes
Database Buffers 775946240 bytes
Redo Buffers 5480448 bytes

sql statement: alter system set control_files = ”/u01/app/oracle/backup/cdb12c/CDB12C/controlfile/o1_mf_8z11h238_.ctl” comment= ”RMAN set” scope=spfile

Oracle instance shut down

connected to auxiliary database (not started)
Oracle instance started

Total System Global Area 1068937216 bytes

Fixed Size 2296576 bytes
Variable Size 285213952 bytes
Database Buffers 775946240 bytes
Redo Buffers 5480448 bytes

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until time “sysdate-10/1440″;
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile 11 to new;
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile 11;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

executing command: SET NEWNAME

Starting restore at 25-JUL-13
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=11 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00011 to /u01/app/oracle/backup/cdb12c/VPWZ_PITR_TESTDB1_CDB12C/datafile/o1_mf_example_%u_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/backup/cdb12c/bkp_0cofhir0_1_1
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/backup/cdb12c/bkp_0cofhir0_1_1 tag=TAG20130724T092240
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 25-JUL-13

datafile 11 switched to datafile copy
input datafile copy RECID=18 STAMP=821699904 file name=/u01/app/oracle/backup/cdb12c/VPWZ_PITR_TESTDB1_CDB12C/datafile/o1_mf_example_8z11kry7_.dbf

contents of Memory Script:
{
# set requested point in time
set until time “sysdate-10/1440″;
# online the datafiles restored or switched
sql clone ‘TESTDB1′ “alter database datafile
11 online”;
# recover and open resetlogs
recover clone database tablespace “TESTDB1″:”EXAMPLE”, “SYSTEM”, “UNDOTBS1″, “SYSAUX”, “TESTDB1″:”SYSTEM”, “TESTDB1″:”SYSAUX” delete archivelog;
alter clone database open resetlogs;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile 11 online

Starting recover at 25-JUL-13
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 151 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_151_8yyf1pr7_.arc
archived log for thread 1 with sequence 152 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_152_8yyf26kh_.arc
archived log for thread 1 with sequence 153 is already on disk as file /u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_25/o1_mf_1_153_8z119t3l_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_151_8yyf1pr7_.arc thread=1 sequence=151
archived log file name=/u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_24/o1_mf_1_152_8yyf26kh_.arc thread=1 sequence=152
archived log file name=/u01/app/oracle/fast_recovery_area/CDB12C/archivelog/2013_07_25/o1_mf_1_153_8z119t3l_.arc thread=1 sequence=153
media recovery complete, elapsed time: 00:00:00
Finished recover at 25-JUL-13

database opened

contents of Memory Script:
{
sql clone ‘alter pluggable database TESTDB1 open’;
}
executing Memory Script

sql statement: alter pluggable database TESTDB1 open

contents of Memory Script:
{
# create directory for datapump import
sql ‘TESTDB1′ “create or replace directory
TSPITR_DIROBJ_DPDIR as ”
/u01/app/oracle/backup/cdb12c””;
# create directory for datapump export
sql clone ‘TESTDB1′ “create or replace directory
TSPITR_DIROBJ_DPDIR as ”
/u01/app/oracle/backup/cdb12c””;
}
executing Memory Script

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ”/u01/app/oracle/backup/cdb12c”

sql statement: create or replace directory TSPITR_DIROBJ_DPDIR as ”/u01/app/oracle/backup/cdb12c”

Performing export of tables…
EXPDP> Starting “SYS”.”TSPITR_EXP_vpwz_dvsz”:
EXPDP> Estimate in progress using BLOCKS method…
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
EXPDP> Total estimation using BLOCKS method: 13 MB
EXPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
EXPDP> Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
EXPDP> Processing object type TABLE_EXPORT/TABLE/COMMENT
EXPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
EXPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
EXPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
EXPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/BITMAP_INDEX/INDEX
EXPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
EXPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
EXPDP> . . exported “SH”.”CUSTOMERS” 14.57 KB 55500 rows
EXPDP> Master table “SYS”.”TSPITR_EXP_vpwz_dvsz” successfully loaded/unloaded
EXPDP> ******************************************************************************
EXPDP> Dump file set for SYS.TSPITR_EXP_vpwz_dvsz is:
EXPDP> /u01/app/oracle/backup/cdb12c/tspitr_vpwz_91117.dmp
EXPDP> Job “SYS”.”TSPITR_EXP_vpwz_dvsz” successfully completed at Thu Jul 25 09:59:36 2013 elapsed 0 00:00:57
Export completed

contents of Memory Script:
{
# shutdown clone before import
shutdown clone abort
}
executing Memory Script

Oracle instance shut down

Performing import of tables…
IMPDP> Master table “SYS”.”TSPITR_IMP_vpwz_DEsg” successfully loaded/unloaded
IMPDP> Starting “SYS”.”TSPITR_IMP_vpwz_DEsg”:
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE
IMPDP> Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
IMPDP> . . imported “SH”.”CUSTOMERS” 14.57 KB 55500 rows
IMPDP> Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
IMPDP> Processing object type TABLE_EXPORT/TABLE/COMMENT
IMPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
IMPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
IMPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
IMPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/BITMAP_INDEX/INDEX
IMPDP> Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/BITMAP_INDEX/INDEX_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
IMPDP> Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
IMPDP> Job “SYS”.”TSPITR_IMP_vpwz_DEsg” successfully completed at Thu Jul 25 10:00:09 2013 elapsed 0 00:00:25
Import completed

Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_temp_8z11k004_.tmp deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_temp_8z11jyob_.tmp deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/VPWZ_PITR_TESTDB1_CDB12C/onlinelog/o1_mf_3_8z11l3bd_.log deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/VPWZ_PITR_TESTDB1_CDB12C/onlinelog/o1_mf_2_8z11l2ot_.log deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/VPWZ_PITR_TESTDB1_CDB12C/onlinelog/o1_mf_1_8z11l220_.log deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/VPWZ_PITR_TESTDB1_CDB12C/datafile/o1_mf_example_8z11kry7_.dbf deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_sysaux_8z11jcbr_.dbf deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_system_8z11jcbx_.dbf deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_sysaux_8z11h875_.dbf deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_undotbs1_8z11h87f_.dbf deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/datafile/o1_mf_system_8z11h87b_.dbf deleted
auxiliary instance file /u01/app/oracle/backup/cdb12c/CDB12C/controlfile/o1_mf_8z11h238_.ctl deleted
auxiliary instance file tspitr_vpwz_91117.dmp deleted
Finished recover at 25-JUL-13

 

TEST!

 

SQL> select count(*) from customers;

  COUNT(*)
----------
     55500

Minimal downtime rolling database upgrade to 12c Release 1


This note describes the procedure used to perform a rolling database upgrade from 11.2.0.3 to Oracle 12c Release 1 using a Data Guard physical standby database and transient logical standby database.

The average time to perform a database upgrade is in the region of one to two hours, and for many organizations even that amount of downtime is either not possible or carries a significant financial cost because of the database outage.

The rolling upgrade procedure greatly reduces the downtime for an upgrade from hours to a few minutes, which is the time it takes to perform a database switchover.

At a high level, these are the steps involved in the rolling upgrade process:

  • Start with the 11.2.0.3 Data Guard physical standby database and convert that to a transient logical standby database. Users are still connected to the primary database.
  • Upgrade the transient logical standby database to 12.1.0.1
  • The transient logical standby process uses SQL Apply to take redo generated by a database running a lower Oracle version (11.2.0.3), and apply the redo to a standby database running on a higher Oracle version (12.1.0.1)
  • Perform a switchover so that the original primary database now becomes a physical standby database
  • Use Redo Apply to synchronize (and upgrade) the original primary database with the new upgraded primary database
  • Perform another switchover to revert the databases to their former roles.

 

Oracle provides a Bourne shell script (physru) which really does automate a lot of the rolling upgrade process and is available for download from MOS via the note – Database Rolling Upgrade Shell Script (Doc ID 949322.1).

The DBA only has a few tasks to perform, as the physru script handles most of the rolling upgrade process:

  •  Upgrade the standby database using DBUA or manual upgrade.
  • Start the upgraded standby database in the new Oracle 12c home
  • Start the original primary database in the new Oracle 12c home

 

The physru script accepts six parameters as shown below.

$./physru <sysdba user> <primary TNS alias> <physical standby TNS alias> <primary db unique name> <physical standby db unique name> <target version>

We need to provide the SYSDBA password  and can run this from either the primary database server or from the node hosting the standby database as long as SQL*Net connectivity is available from that node to both the databases involved in the rolling upgrade.

We need to execute the script three times; let us see what happens at each stage.

 

First execution

Creates control file backups for both the primary and the target physical standby database.

Creates Guaranteed Restore Points (GRPs) on both the primary database and the physical standby database that can be used to flash back to the beginning of the process or to any intermediate step along the way.

Converts the physical standby into a transient logical standby database.

 

Second execution

 

Uses SQL Apply to synchronize the transient logical standby database and make it current with the primary.

Performs a switchover to the upgraded 12c transient logical standby, and the standby database becomes the primary.

Performs a flashback of the original primary database to the initial Guaranteed Restore Point and converts the original primary into a physical standby.

 

Third execution

 

Starts Redo Apply on the new physical standby database (the original primary database) to apply all redo that has been generated during the rolling upgrade process, including any SQL statements that have been executed on the transient logical standby as part of the upgrade.

When synchronized, the script offers the option of performing a final switchover to return the databases to their original roles of primary and standby, but now on the new 12c database software version.

Removes all Guaranteed Restore Points

Prerequisites

Data Guard primary and physical standby database environment exists

Flashback Database is enabled on both the primary and the standby databases

If Data Guard Broker is managing the configuration, then it has to be disabled for the duration of the upgrade process (by setting the initialization parameter DG_BROKER_START=FALSE)

Ensure that the log transport (initialization parameter LOG_ARCHIVE_DEST_n) is correctly configured to perform a switchover from the primary database to the target physical standby database and back.
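
For example, on the primary database a remote archive destination along the following lines should already be in place, with the corresponding entry on the standby pointing back at the primary (the service name here simply matches the TNS alias used in this test):

SQL> alter system set log_archive_dest_2='SERVICE=testdbs ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testdbs' scope=both;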

Static entries are defined in the listener.ora file on both the primary and standby database nodes for the databases directly involved in the rolling upgrade process.

Oracle 12.1.0.1.0 software has already been installed on both the primary as well as standby database servers

 

Let us now see an example.

 

In this case the primary database is TESTDB and the physical standby database is TESTDBS.

The DB_UNIQUE_NAME values of the primary and the standby are likewise TESTDB and TESTDBS.

The original version is 11.2.0.3 and we are upgrading to 12.1.0.1.

We have enabled Flashback Database on both the primary and the standby databases.

Added static entries in the listener.ora on both sites and then reloaded the listener.

For example on the Primary site:

 

(SID_DESC=
  (SID_NAME=testdb)
  (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
  (GLOBAL_DBNAME=testdb)
)

 

And on the Standby site:

 

(SID_DESC=
  (SID_NAME=testdb)
  (ORACLE_HOME = /u01/app/oracle/product/11.2.0/dbhome_2)
  (GLOBAL_DBNAME=testdbs)
)

The tnsnames.ora on both Primary as well as Standby sites have entries for TESTDB and TESTDBS.

Important – before starting the operation, do a tnsping from both sites and ensure that the TNS aliases are being resolved.
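
For example, run from each server (both aliases must resolve and come back with an OK status):

[oracle@kens-orasql-001-test ~]$ tnsping testdb
[oracle@kens-orasql-001-test ~]$ tnsping testdbs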

 

Stop managed recovery and shut down the standby database.

Mount the standby database.
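
On the standby, that amounts to something like the following, run from the existing 11.2.0.3 home:

SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate
SQL> startup mount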

 

Now run physru script – Execution One 

Note – we can run the script from either the Primary or Standby site, but in this example we are running it from the Primary site for all three executions of the script.

 

[oracle@kens-orasql-001-test ~]$ ./physru SYS testdb testdbs testdb testdbs 12.1.0.1.0
Please enter the sysdba password:
### Initialize script to either start over or resume execution
Jul 31 08:09:06 2013 [0-1] Identifying rdbms software version
Jul 31 08:09:06 2013 [0-1] database testdb is at version 11.2.0.3.0
Jul 31 08:09:06 2013 [0-1] database testdbs is at version 11.2.0.3.0
Jul 31 08:09:07 2013 [0-1] verifying flashback database is enabled at testdb and testdbs
Jul 31 08:09:07 2013 [0-1] verifying available flashback restore points
Jul 31 08:09:07 2013 [0-1] verifying DG Broker is disabled
Jul 31 08:09:07 2013 [0-1] looking up prior execution history
Jul 31 08:09:07 2013 [0-1] purging script execution state from database testdb
Jul 31 08:09:07 2013 [0-1] purging script execution state from database testdbs
Jul 31 08:09:07 2013 [0-1] starting new execution of script

### Stage 1: Backup user environment in case rolling upgrade is aborted
Jul 31 08:09:08 2013 [1-1] stopping media recovery on testdbs
Jul 31 08:09:09 2013 [1-1] creating restore point PRU_0000_0001 on database testdbs
Jul 31 08:09:09 2013 [1-1] backing up current control file on testdbs
Jul 31 08:09:09 2013 [1-1] created backup control file /u01/app/oracle/product/11.2.0/dbhome_2/dbs/PRU_0001_testdbs_f.f
Jul 31 08:09:09 2013 [1-1] creating restore point PRU_0000_0001 on database testdb
Jul 31 08:09:09 2013 [1-1] backing up current control file on testdb
Jul 31 08:09:09 2013 [1-1] created backup control file /u01/app/oracle/product/11.2.0/dbhome_2/dbs/PRU_0001_testdb_f.f

NOTE: Restore point PRU_0000_0001 and backup control file PRU_0001_testdbs_f.f
      can be used to restore testdbs back to its original state as a
      physical standby, in case the rolling upgrade operation needs to be aborted
      prior to the first switchover done in Stage 4.

### Stage 2: Create transient logical standby from existing physical standby
Jul 31 08:12:43 2013 [2-1] verifying RAC is disabled at testdbs
Jul 31 08:12:43 2013 [2-1] verifying database roles
Jul 31 08:12:43 2013 [2-1] verifying physical standby is mounted
Jul 31 08:12:43 2013 [2-1] verifying database protection mode
Jul 31 08:12:43 2013 [2-1] verifying transient logical standby datatype support

WARN: Objects have been identified on the primary database which will not be
      replicated on the transient logical standby.  The complete list of
      objects and their associated unsupported datatypes can be found in the
      dba_logstdby_unsupported view.  For convenience, this script has written
      the contents of this view to a file - physru_unsupported.log.

      Various options exist to deal with these objects such as:
        - disabling applications that modify these objects
        - manually resolving these objects after the upgrade
        - extending support to these objects (see metalink note: 559353.1)

      If you need time to review these options, you should enter 'n' to exit
      the script.  Otherwise, you should enter 'y' to continue with the
      rolling upgrade.

Are you ready to proceed with the rolling upgrade? (y/n): y

Jul 31 08:13:37 2013 [2-1] continuing
Jul 31 08:13:37 2013 [2-2] starting media recovery on testdbs
Jul 31 08:13:43 2013 [2-2] confirming media recovery is running
Jul 31 08:13:45 2013 [2-2] waiting for v$dataguard_stats view to initialize
Jul 31 08:13:51 2013 [2-2] waiting for apply lag on testdbs to fall below 30 seconds
Jul 31 08:13:51 2013 [2-2] apply lag is now less than 30 seconds
Jul 31 08:13:52 2013 [2-2] stopping media recovery on testdbs
Jul 31 08:13:53 2013 [2-2] executing dbms_logstdby.build on database testdb
Jul 31 08:14:00 2013 [2-2] converting physical standby into transient logical standby
Jul 31 08:14:06 2013 [2-3] opening database testdbs
Jul 31 08:14:10 2013 [2-4] configuring transient logical standby parameters for rolling upgrade
Jul 31 08:14:10 2013 [2-4] starting logical standby on database testdbs
Jul 31 08:14:16 2013 [2-4] waiting until logminer dictionary has fully loaded
Jul 31 08:16:28 2013 [2-4] dictionary load 42% complete
Jul 31 08:16:38 2013 [2-4] dictionary load 74% complete
Jul 31 08:16:48 2013 [2-4] dictionary load 75% complete
Jul 31 08:21:30 2013 [2-4] dictionary load is complete
Jul 31 08:21:31 2013 [2-4] waiting for v$dataguard_stats view to initialize
Jul 31 08:21:37 2013 [2-4] waiting for apply lag on testdbs to fall below 30 seconds
Jul 31 08:22:08 2013 [2-4] current apply lag: 265
Jul 31 08:22:38 2013 [2-4] current apply lag: 295
Jul 31 08:23:08 2013 [2-4] current apply lag: 325
Jul 31 08:23:38 2013 [2-4] current apply lag: 355
Jul 31 08:36:40 2013 [2-4] apply lag is now less than 30 seconds

NOTE: Database testdbs is now ready to be upgraded.  This script has left the
      database open in case you want to perform any further tasks before
      upgrading the database.  Once the upgrade is complete, the database must
      opened in READ WRITE mode before this script can be called to resume the
      rolling upgrade.

NOTE: If testdbs was previously a RAC database that was disabled, it may be
      reverted back to a RAC database upon completion of the rdbms upgrade.
      This can be accomplished by performing the following steps:

          1) On instance testdb, set the cluster_database parameter to TRUE.
          eg: SQL> alter system set cluster_database=true scope=spfile;

          2) Shutdown instance testdb.
          eg: SQL> shutdown abort;

          3) Startup and open all instances for database testdbs.
          eg: srvctl start database -d testdbs

 

If we connect to the standby database we can now see that the role has been changed from PHYSICAL STANDBY to LOGICAL STANDBY

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
LOGICAL STANDBY

 

Now start the 12c database upgrade on the standby database. Have a look at this post, which discusses the upgrade to 12.1.0.1 using DBUA: http://gavinsoorma.com/2013/07/12c-database-upgrade-11-2-0-3-to-12-1-0-1-upgrade-using-dbua/

Note that users are still connected to the primary database and it is business as usual

Make some changes to the Primary database while the standby database upgrade is in progress.

SQL> update customers set cust_city='Dubai' where rownum < 10001;

10000 rows updated.

SQL> commit;

Commit complete.

SQL> create table mycustomers as select * from customers;

Table created.

 

After the 12c upgrade is completed, we need to update the static entry we made in the listener.ora providing the location of the 12c database Oracle Home and then reload the listener.

For example this is the change we made in the listener.ora

SID_LIST_LISTENER12C =
(SID_LIST =
(SID_DESC =
(GLOBAL_DBNAME = testdbs)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
(SID_NAME = testdb)
)
)

 

After the upgrade we connect to the transient logical standby database, which is now running on 12c, and run the ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE command. Ensure the database is running in READ WRITE mode.

[oracle@kens-orasql-001-dev admin]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Thu Aug 1 07:12:41 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> ALTER DATABASE START LOGICAL STANDBY APPLY IMMEDIATE;

Database altered.

SQL>

SQL> select open_mode from v$database;

OPEN_MODE
--------------------
READ WRITE

 

Now run the physru script again - Execution Two

[oracle@kens-orasql-001-test ~]$ ./physru SYS testdb testdbs testdb testdbs 12.1.0.1.0
Please enter the sysdba password:
### Initialize script to either start over or resume execution
Aug 01 08:06:03 2013 [0-1] Identifying rdbms software version
Aug 01 08:06:03 2013 [0-1] database testdb is at version 11.2.0.3.0
Aug 01 08:06:03 2013 [0-1] database testdbs is at version 12.1.0.1.0
Aug 01 08:06:04 2013 [0-1] verifying flashback database is enabled at testdb and testdbs
Aug 01 08:06:04 2013 [0-1] verifying available flashback restore points
Aug 01 08:06:04 2013 [0-1] verifying DG Broker is disabled
Aug 01 08:06:05 2013 [0-1] looking up prior execution history
Aug 01 08:06:05 2013 [0-1] last completed stage [2-4] using script version 0001
Aug 01 08:06:05 2013 [0-1] resuming execution of script

### Stage 3: Validate upgraded transient logical standby
Aug 01 08:06:05 2013 [3-1] database testdbs is no longer in OPEN MIGRATE mode
Aug 01 08:06:05 2013 [3-1] database testdbs is at version 12.1.0.1.0

### Stage 4: Switch the transient logical standby to be the new primary
Aug 01 08:06:06 2013 [4-1] waiting for testdbs to catch up (this could take a while)
Aug 01 08:06:07 2013 [4-1] waiting for v$dataguard_stats view to initialize
Aug 01 08:06:07 2013 [4-1] waiting for apply lag on testdbs to fall below 30 seconds
Aug 01 08:06:07 2013 [4-1] apply lag is now less than 30 seconds
Aug 01 08:06:07 2013 [4-2] switching testdb to become a logical standby
Aug 01 08:06:13 2013 [4-2] testdb is now a logical standby
Aug 01 08:06:13 2013 [4-3] waiting for standby testdbs to process end-of-redo from primary
Aug 01 08:06:14 2013 [4-4] switching testdbs to become the new primary
Aug 01 08:06:18 2013 [4-4] testdbs is now the new primary

### Stage 5: Flashback former primary to pre-upgrade restore point and convert to physical
Aug 01 08:06:19 2013 [5-1] shutting down database testdb
Aug 01 08:06:28 2013 [5-1] mounting database testdb
Aug 01 08:06:34 2013 [5-2] flashing back database testdb to restore point PRU_0000_0001
Aug 01 08:06:37 2013 [5-3] converting testdb into physical standby
Aug 01 08:06:39 2013 [5-4] shutting down database testdb

NOTE: Database testdb has been shutdown, and is now ready to be started
      using the newer version Oracle binary.  This script requires the
      database to be mounted (on all active instances, if RAC) before calling
      this script to resume the rolling upgrade.

 

The transient logical standby database has now been converted to a Data Guard primary database

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

 

We now prepare the original primary database for the upgrade to 12c. The application is now running from the standby site.

Change the static listener.ora entry to point to the 12c Oracle Home (or create a new 12c listener in addition to the 11g one) and then reload the listener

(SID_DESC=
(SID_NAME=testdb)
(ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
(GLOBAL_DBNAME=testdb)
)

Copy the spfile, init.ora and password file for TESTDB from the 11g Oracle Home to the 12c Oracle Home.

Copy the tnsnames.ora file from 11g $ORACLE_HOME/network/admin to 12c $ORACLE_HOME/network/admin

Change /etc/oratab entry for TESTDB to point to new Oracle 12c home

Mount the TESTDB database (now the standby database) from the new Oracle 12c home

Connect to both databases, TESTDB and TESTDBS, and ensure that the parameters log_archive_dest_state_1 and log_archive_dest_state_2 are both set to ENABLE
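
For example, on each database (adjust the destination number to match your configuration):

SQL> show parameter log_archive_dest_state_2
SQL> alter system set log_archive_dest_state_2=enable scope=both;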

 

Now the third and final execution of the physru script!

The application is still connected to TESTDBS and database changes are being performed

SQL> update mycustomers set cust_city='Timbuktu';

55500 rows updated.

SQL> commit;

Commit complete.

[oracle@kens-orasql-001-test ~]$ ./physru SYS testdb testdbs testdb testdbs 12.1.0.1.0
Please enter the sysdba password:
### Initialize script to either start over or resume execution
Aug 01 08:34:04 2013 [0-1] Identifying rdbms software version
Aug 01 08:34:04 2013 [0-1] database testdb is at version 12.1.0.1.0
Aug 01 08:34:04 2013 [0-1] database testdbs is at version 12.1.0.1.0
Aug 01 08:34:05 2013 [0-1] verifying flashback database is enabled at testdb and testdbs
Aug 01 08:34:06 2013 [0-1] verifying available flashback restore points
Aug 01 08:34:06 2013 [0-1] verifying DG Broker is disabled
Aug 01 08:34:06 2013 [0-1] looking up prior execution history
Aug 01 08:34:07 2013 [0-1] last completed stage [5-4] using script version 0001
Aug 01 08:34:07 2013 [0-1] resuming execution of script

### Stage 6: Run media recovery through upgrade redo
Aug 01 08:34:08 2013 [6-1] upgrade redo region identified as scn range [1306630, 3089888]
Aug 01 08:34:08 2013 [6-1] starting media recovery on testdb
Aug 01 08:34:14 2013 [6-1] confirming media recovery is running
Aug 01 08:34:15 2013 [6-1] waiting for media recovery to initialize v$recovery_progress
Aug 01 08:42:49 2013 [6-1] monitoring media recovery's progress
Aug 01 08:42:49 2013 [6-2] last applied scn 1295902 is approaching upgrade redo start scn 1306630
Aug 01 08:47:23 2013 [6-3] recovery of upgrade redo at 01% - estimated complete at Aug 01 12:17:43
Aug 01 08:47:39 2013 [6-3] recovery of upgrade redo at 03% - estimated complete at Aug 01 10:49:51
Aug 01 08:47:54 2013 [6-3] recovery of upgrade redo at 04% - estimated complete at Aug 01 10:28:12
Aug 01 08:48:09 2013 [6-3] recovery of upgrade redo at 08% - estimated complete at Aug 01 09:46:20
Aug 01 08:48:24 2013 [6-3] recovery of upgrade redo at 11% - estimated complete at Aug 01 09:29:23
Aug 01 08:48:40 2013 [6-3] recovery of upgrade redo at 13% - estimated complete at Aug 01 09:27:03
Aug 01 08:49:10 2013 [6-3] recovery of upgrade redo at 15% - estimated complete at Aug 01 09:23:47
Aug 01 08:49:26 2013 [6-3] recovery of upgrade redo at 18% - estimated complete at Aug 01 09:18:34
Aug 01 08:49:41 2013 [6-3] recovery of upgrade redo at 21% - estimated complete at Aug 01 09:14:18
Aug 01 08:49:56 2013 [6-3] recovery of upgrade redo at 23% - estimated complete at Aug 01 09:13:24
Aug 01 08:50:11 2013 [6-3] recovery of upgrade redo at 24% - estimated complete at Aug 01 09:12:35
Aug 01 08:50:27 2013 [6-3] recovery of upgrade redo at 26% - estimated complete at Aug 01 09:11:22
Aug 01 08:50:42 2013 [6-3] recovery of upgrade redo at 30% - estimated complete at Aug 01 09:09:03
Aug 01 08:50:57 2013 [6-3] recovery of upgrade redo at 32% - estimated complete at Aug 01 09:07:51
Aug 01 08:51:12 2013 [6-3] recovery of upgrade redo at 36% - estimated complete at Aug 01 09:06:05
Aug 01 08:51:28 2013 [6-3] recovery of upgrade redo at 40% - estimated complete at Aug 01 09:04:32
Aug 01 08:51:43 2013 [6-3] recovery of upgrade redo at 41% - estimated complete at Aug 01 09:04:37
Aug 01 08:51:58 2013 [6-3] recovery of upgrade redo at 43% - estimated complete at Aug 01 09:03:58
Aug 01 08:52:14 2013 [6-3] recovery of upgrade redo at 44% - estimated complete at Aug 01 09:03:58
Aug 01 08:52:29 2013 [6-3] recovery of upgrade redo at 47% - estimated complete at Aug 01 09:03:15
Aug 01 08:52:44 2013 [6-3] recovery of upgrade redo at 50% - estimated complete at Aug 01 09:02:46
Aug 01 08:53:00 2013 [6-3] recovery of upgrade redo at 55% - estimated complete at Aug 01 09:01:05
Aug 01 08:53:15 2013 [6-3] recovery of upgrade redo at 75% - estimated complete at Aug 01 08:56:40
Aug 01 08:53:30 2013 [6-3] recovery of upgrade redo at 79% - estimated complete at Aug 01 08:56:19
Aug 01 08:53:45 2013 [6-3] recovery of upgrade redo at 82% - estimated complete at Aug 01 08:56:11
Aug 01 08:54:01 2013 [6-3] recovery of upgrade redo at 84% - estimated complete at Aug 01 08:56:09
Aug 01 08:54:16 2013 [6-3] recovery of upgrade redo at 86% - estimated complete at Aug 01 08:56:06
Aug 01 08:54:31 2013 [6-3] recovery of upgrade redo at 88% - estimated complete at Aug 01 08:56:03
Aug 01 08:54:46 2013 [6-3] recovery of upgrade redo at 90% - estimated complete at Aug 01 08:56:04
Aug 01 08:55:02 2013 [6-4] media recovery has finished recovering through upgrade

### Stage 7: Switch back to the original roles prior to the rolling upgrade

NOTE: At this point, you have the option to perform a switchover
     which will restore testdb back to a primary database and
     testdbs back to a physical standby database.  If you answer 'n'
     to the question below, testdb will remain a physical standby
     database and testdbs will remain a primary database.

Do you want to perform a switchover? (y/n): y

Aug 01 08:55:42 2013 [7-1] continuing
Aug 01 08:55:44 2013 [7-2] waiting for v$dataguard_stats view to initialize
Aug 01 08:55:44 2013 [7-2] waiting for apply lag on testdb to fall below 30 seconds
Aug 01 08:55:44 2013 [7-2] apply lag is now less than 30 seconds
Aug 01 08:55:45 2013 [7-3] switching testdbs to become a physical standby
Aug 01 08:55:48 2013 [7-3] testdbs is now a physical standby
Aug 01 08:55:48 2013 [7-3] shutting down database testdbs
Aug 01 08:55:49 2013 [7-3] mounting database testdbs
Aug 01 08:55:57 2013 [7-4] waiting for standby testdb to process end-of-redo from primary
Aug 01 08:55:59 2013 [7-5] switching testdb to become the new primary
Aug 01 08:55:59 2013 [7-5] testdb is now the new primary
Aug 01 08:55:59 2013 [7-5] opening database testdb
Aug 01 08:56:05 2013 [7-6] starting media recovery on testdbs
Aug 01 08:56:11 2013 [7-6] confirming media recovery is running

### Stage 8: Statistics
script start time:                                           31-Jul-13 07:20:51
script finish time:                                          01-Aug-13 08:07:09
total script execution time:                                       +01 00:46:18
wait time for user upgrade:                                        +00 23:28:40
active script execution time:                                      +00 01:17:38
transient logical creation start time:                       31-Jul-13 07:25:18
transient logical creation finish time:                      31-Jul-13 07:25:48
primary to logical switchover start time:                    01-Aug-13 07:17:03
logical to primary switchover finish time:                   01-Aug-13 07:17:15
primary services offline for:                                      +00 00:00:12
total time former primary in physical role:                        +00 00:48:56
time to reach upgrade redo:                                        +00 00:00:16
time to recover upgrade redo:                                      +00 00:11:56
primary to physical switchover start time:                   01-Aug-13 08:06:36
physical to primary switchover finish time:                  01-Aug-13 08:06:59
primary services offline for:                                      +00 00:00:23

SUCCESS: The physical rolling upgrade is complete

If we look at the statistics above, the key point to note is how long the application or database was actually down.

In this test I had started the rolling upgrade on one day and then continued it the next day. That accounts for the 23-odd hours shown in the wait time for user upgrade figure.

But if we look at the actual downtime incurred across the two separate switchovers, one was 12 seconds and the other 23 seconds, which gives us a total actual downtime of 35 seconds.

Now connect to the original primary database and check if the database role is what it originally was

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
PRIMARY

 

Check that the last change made has been applied

 

SQL> select distinct cust_city from mycustomers;

CUST_CITY
------------------------------
Timbuktu

 

Lastly, restart the standby database and start managed recovery

 

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area 801701888 bytes
Fixed Size 2293496 bytes
Variable Size 314573064 bytes
Database Buffers 478150656 bytes
Redo Buffers 6684672 bytes
Database mounted.
Database opened.

SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.

Oracle 12c New Feature – In-Database Archiving


Very often our databases contain some very large tables holding a lot of historical and legacy data, and the challenge is deciding what is old data and what is current data. Even if we do identify the old data we no longer need and move it to tape storage, what happens if that data is suddenly required? Getting that data back into the database can be a very expensive and time-consuming exercise.

Keeping large volumes of (unnecessary at most times) historical data in the production OLTP database can not only increase the database footprint for backup and recovery but can also have an adverse impact on database performance.

The new 12c Information Lifecycle Management (ILM) feature called In-Database Archiving enables us to overcome the issues stated above by allowing the database to distinguish between active data and 'older', inactive data while still storing everything in the same database.

When we enable row archival for a table, a hidden column called ORA_ARCHIVE_STATE is added to the table. This column is automatically assigned a value of 0 to denote current data, and we can decide which rows in the table are to be treated as candidates for row archiving; those rows are assigned the value 1.

Once the older data is distinguished from the current data, we can archive and compress the older data to reduce the size of the database, or move it to a cheaper storage tier to reduce the cost of storing it.

Let us have a look at an example of using this Oracle 12c new feature called In-Database Archiving

SQL> select count(*) from sales;

  COUNT(*)
----------
    918843

SQL> alter table sales row archival;

Table altered.

SQL> select distinct ora_archive_state from sales;

ORA_ARCHIVE_STATE
--------------------------------------------------------------------------------
0

 

Note – the column ORA_ARCHIVE_STATE is now added to the table SALES and is a hidden column.

We now want to designate all rows in the sales table which belong to the years 1998 and 1999 as old and historical data.

All data after 01-JAN-2000 should be treated as current and active data.

SQL> update sales
  2  set ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1)
  3  where time_id < '01-JAN-00';

426779 rows updated.

 

If we now query the SALES table, we see that only about half the actual number of rows are returned, as Oracle is not returning the rows where the value of the ORA_ARCHIVE_STATE column is 1.

SQL> select distinct ora_archive_state from sales;

ORA_ARCHIVE_STATE
--------------------------------------------------------------------------------
0

SQL> select count(*) from sales;

  COUNT(*)
----------
    492064

 

Now let us assume there is a requirement to view the historical and inactive data as well. At the session level we can set the value of the parameter ROW ARCHIVAL VISIBILITY to ALL.

SQL> alter session set row archival visibility=ALL;

Session altered.

SQL> select count(*) from sales;

  COUNT(*)
----------
    918843

SQL> select distinct ora_archive_state from sales;

ORA_ARCHIVE_STATE
--------------------------------------------------------------------------------
1
0
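
To revert to seeing only the active rows for the remainder of the session, the visibility can be set back to ACTIVE:

SQL> alter session set row archival visibility=ACTIVE;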

 

So what we can do now is partition the sales table on the ORA_ARCHIVE_STATE column and then compress the partitions containing the archived data. The current data will be left uncompressed as it is frequently accessed and we do not want to impact performance.

We can also make those partitions containing the older data read only and exclude them from our regular daily database backups.
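
As a rough sketch of that approach (the table, partition and tablespace names below are purely illustrative, and assume a copy of SALES that has been list-partitioned on ORA_ARCHIVE_STATE), the archived partition could be compressed and its tablespace made read only:

SQL> alter table sales_part move partition p_archived row store compress advanced;
SQL> alter tablespace archive_data read only;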

Oracle 12c New Feature – Temporal Validity


The Temporal Validity feature adds a time dimension to each row in a table, consisting of two date-time columns that denote the validity of the data. Data that is older, no longer valid, or not yet valid can be hidden from queries so that only active data is returned.

In the case of a very large table, processing just a small active row set rather than the entire table can be a significant performance gain.

Temporal Validity is controlled by the user or application, which defines the valid-time dimension for the table via the PERIOD FOR clause.

Until now we had date and timestamp columns in tables, but they record the transaction time. For example, a table called EMP has a column called DOJ, and this records the transaction time of when the entry was made in the EMP table for a particular employee.

But suppose we have a case where contracts are offered to potential employees on a particular date and we want to run queries based on this timestamp and not when the DOJ record for that employee is entered in the database

Let us look at an example.

We create a table MYSALES and alter the table adding the valid-time dimension TRACK_TIME via the PERIOD FOR clause as shown below
 

SQL> create table sh.mysales as select * from sh.sales;

Table created.

SQL> select count(*) from sh.mysales;

  COUNT(*)
----------
    918843

SQL> alter table sh.mysales add period for track_time;

Table altered.

 

Oracle will create hidden columns TRACK_TIME, TRACK_TIME_START and TRACK_TIME_END

 

SQL> desc mysales
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
PROD_ID                                   NOT NULL NUMBER
CUST_ID                                   NOT NULL NUMBER
TIME_ID                                   NOT NULL DATE
CHANNEL_ID                                NOT NULL NUMBER
PROMO_ID                                  NOT NULL NUMBER
QUANTITY_SOLD                             NOT NULL NUMBER(10,2)
AMOUNT_SOLD                               NOT NULL NUMBER(10,2)

SQL> select column_name,data_type from user_tab_cols where table_name='MYSALES' and hidden_column='YES';

COLUMN_NAME                                        DATA_TYPE
-------------------------------------------------- ----------------------------------------
TRACK_TIME                                         NUMBER
TRACK_TIME_END                                     TIMESTAMP(6) WITH TIME ZONE
TRACK_TIME_START                                   TIMESTAMP(6) WITH TIME ZONE

 

We can filter on the valid-time columns either directly in a SELECT statement using the PERIOD FOR clause, or by using the DBMS_FLASHBACK_ARCHIVE package.

The DBMS_FLASHBACK_ARCHIVE package controls, at the session level, whether queries see the data as at a given time, only the data that is currently valid within its valid-time period, or the full table.

This visibility control applies not only to SELECT statements but also to DML statements.
 

SQL>  select count(*) from sh.sales;

  COUNT(*)
----------
    918843

 

Update the table so that only rows with a TIME_ID of 01-JAN-2000 or later are considered valid or active.
 

SQL> update sh.mysales set track_time_start=sysdate ;

918843 rows updated.

SQL> update sh.mysales set track_time_end=sysdate where time_id < '01-JAN-2000'; 

426779 rows updated. 

SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('CURRENT');

PL/SQL procedure successfully completed.

 

So now when we do a count(*) from the table MYSALES, it only returns the rows which we have deemed to be active or current and not all the rows.
 

SQL> select count(*) from sh.mysales;

  COUNT(*)
----------
    492064

 

Let us now move the cut-off date for active or valid rows forward from 01-JAN-2000 to 01-JAN-2001.
 

SQL> update sh.mysales set track_time_end=sysdate where time_id < '01-JAN-2001';

659425 rows updated.

 

Now when we issue the same SELECT statement, because fewer rows are considered active, the query returns 259418 rows as opposed to the earlier 492064 rows.
 

SQL>  EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('CURRENT');

PL/SQL procedure successfully completed.

SQL> select count(*) from sh.mysales;

  COUNT(*)
----------
    259418

SQL> EXECUTE DBMS_FLASHBACK_ARCHIVE.enable_at_valid_time('ALL');

PL/SQL procedure successfully completed.

SQL>  select count(*) from sh.mysales;

  COUNT(*)
----------
    918843
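
The same filtering can also be expressed per query rather than at the session level, using the valid-time extensions to SELECT (a sketch only; the window used below is arbitrary):

-- rows valid right now according to the TRACK_TIME dimension
select count(*) from sh.mysales
as of period for track_time systimestamp;

-- rows whose validity overlaps a given window
select count(*) from sh.mysales
versions period for track_time
  between to_timestamp('01-JAN-2013','DD-MON-YYYY')
  and     to_timestamp('01-JAN-2014','DD-MON-YYYY');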

Oracle 12c New Feature – Heat Map and Automatic Data Optimization


One of the main goals of an ILM (Information Lifecycle Management) policy is to reduce the cost of storage, improve performance and access times for both current and archived data, and retain data long enough to satisfy regulatory requirements related to data preservation.

As data grows, rather than simply buying and installing additional storage as is normally the case, organizations can adopt both storage tiering and compression tiering models to achieve their ILM goals.

Storage tiering typically means storing older and dormant data on cheaper, low-cost storage while retaining the current or most volatile data on high-performance storage.

Using compression tiering, current or OLTP data can be compressed at lower compression levels, while data which is rarely accessed and rarely modified can be subjected to higher levels of compression and also moved to low-cost storage.

Oracle 12c has a new feature called Heat Map which tracks and marks data, even down to the row and block level, as it goes through lifecycle changes.

ADO, or Automatic Data Optimization, works with the Heat Map feature and allows us to create policies at the tablespace, object and even row level which specify conditions dictating when data will be moved or compressed based on statistics about its usage.

Real-time data access statistics are collected in memory and exposed via the V$HEAT_MAP_SEGMENT view, and are then regularly flushed by a scheduler job to tables on disk such as HEAT_MAP_STAT$, which are presented to the DBA via views like DBA_HEAT_MAP_SEG_HISTOGRAM and DBA_HEAT_MAP_SEGMENT.

Heat Map is enabled at the instance level by setting the parameter HEAT_MAP to ON.

Let us now look at an example of how we can use Heat Map and ADO to perform compression level tiering of data.

This example is based on the 12c database tutorials found at the Oracle Learning Library site, which has many good introductory tutorials for anyone wanting to work with the important new features available in Oracle 12c.

In this example we will create an ILM policy which will compress the MYOBJECTS table if there have been no modifications performed on the table since the past 30 days.

The assumption is that dormant data can be compressed to save storage, and since it is not frequently accessed, compressing it will not impact OLTP performance.

We create a procedure which simulates the passage of time in this example so that the table qualifies for the ADO action (compression) even though 30 days have not actually elapsed.

 

CREATE OR REPLACE PROCEDURE set_stat (object_id        number,
                                      data_object_id   number,
                                      n_days           number,
                                      p_ts#            number,
                                      p_segment_access number)
as
begin
  insert into sys.heat_map_stat$
    (obj#, dataobj#, track_time, segment_access, ts#)
  values
    (object_id, data_object_id, sysdate - n_days, p_segment_access, p_ts#);
  commit;
end;
/

 

We then grant execute on this procedure to SCOTT and then turn on the Heat Map option at the instance level.

 SQL> grant execute on set_stat to scott;
 SQL> alter system set heat_map=on scope=both;

After enabling heat map tracking, we set the heat map tracking start time back 30 days to ensure statistics logged after this time are valid and considered by Automatic Data Optimization (ADO).

exec dbms_ilm_admin.set_heat_map_start(start_date => sysdate - 30)

We now connect as SCOTT and populate the table with data.

SQL> create table myobjects as select * from all_objects;
Table created.

SQL> declare
 sql_test clob;
 begin
 for i in 1..5
 loop sql_test := 'insert /*+ append */ into scott.myobjects select * from scott.myobjects';
 execute immediate sql_test;
 commit;
 end loop;
 end;
 /

Note the size of the (uncompressed) table is 320 MB

SQL> select sum(bytes)/1048576 from user_segments where 
    segment_name='MYOBJECTS';

SUM(BYTES)/1048576
 ------------------
 320

SQL> select count(*) from myobjects;

COUNT(*)
 ----------
 2360288

Now verify that heat map tracking collected statistics for the SCOTT.MYOBJECTS table.

SQL> alter session set nls_date_format='dd-mon-yy hh:mi:ss';

Session altered.

select OBJECT_NAME,SEGMENT_WRITE_TIME , SEGMENT_READ_TIME, FULL_SCAN
 FROM dba_heat_map_segment
 WHERE OBJECT_NAME='MYOBJECTS'
 AND OWNER = 'SCOTT';

OBJECT_NAME
 --------------------------------------------------------------------------------
 SEGMENT_WRITE_TIME SEGMENT_READ_TIME  FULL_SCAN
 ------------------ ------------------ ------------------
 MYOBJECTS
 09-sep-13 02:39:09

col "Segment write" format A14
col "Full Scan" format A12
col "Lookup Scan" format a12

 select object_name, track_time "Tracking Time",
 segment_write "Segment write",
 full_scan "Full Scan",
 lookup_scan "Lookup Scan"
 from DBA_HEAT_MAP_SEG_HISTOGRAM
 where object_name='MYOBJECTS'
 and owner = 'SCOTT';

OBJECT_NAME
 --------------------------------------------------------------------------------
 Tracking Time      Segment write  Full Scan    Lookup Scan
 ------------------ -------------- ------------ ------------
 MYOBJECTS
 09-sep-13 02:40:14 NO             YES          NO

Confirm that the table is not compressed currently

SQL> select compression, compress_for from dba_tables where table_name = 'MYOBJECTS' and owner = 'SCOTT';

COMPRESS COMPRESS_FOR
 -------- ------------------------------
 DISABLED

Add a compression policy on SCOTT.MYOBJECTS table.

ALTER TABLE scott.myobjects ILM ADD POLICY ROW STORE 
COMPRESS ADVANCED SEGMENT AFTER 30 DAYS OF NO MODIFICATION;

Verify that the policy has been added.

 select policy_name, action_type, scope, compression_level,
 condition_type, condition_days
 from   user_ilmdatamovementpolicies
 order by policy_name;

POLICY_NAME
 --------------------------------------------------------------------------------
 ACTION_TYPE SCOPE   COMPRESSION_LEVEL              CONDITION_TYPE
 ----------- ------- ------------------------------ ----------------------
 CONDITION_DAYS
 --------------
 P3
 COMPRESSION SEGMENT ADVANCED                       LAST MODIFICATION TIME
 30

 select policy_name, object_name, inherited_from, enabled
 from user_ilmobjects;

POLICY_NAME
 --------------------------------------------------------------------------------
 OBJECT_NAME
 --------------------------------------------------------------------------------
 INHERITED_FROM       ENA
 -------------------- ---
 P3
 MYOBJECTS
 POLICY NOT INHERITED YES

 select * from user_ilmpolicies;
POLICY_NAME
 --------------------------------------------------------------------------------
 POLICY_TYPE   TABLESPACE                     ENABLED
 ------------- ------------------------------ -------
 P3
 DATA MOVEMENT                                YES

We now want to simulate a situation where 30 days have passed without any modification to the MYOBJECTS table.
Using the set_stat procedure we had created earlier, we are turning the heat map statistics clock back by 30 days.
As SYS:

alter session set nls_date_format='dd-mon-yy hh:mi:ss';

declare
 v_obj# number;
 v_dataobj# number;
 v_ts#      number;
 begin
 select object_id, data_object_id into v_obj#, v_dataobj#
 from all_objects
 where object_name = 'MYOBJECTS'
 and owner = 'SCOTT';
 select ts# into v_ts#
 from sys.ts$ a,
 dba_segments b
 where  a.name = b.tablespace_name
 and  b.segment_name = 'MYOBJECTS';
 commit;
 sys.set_stat
 (object_id         => v_obj#,
 data_object_id    => v_dataobj#,
 n_days            => 30,
 p_ts#             => v_ts#,
 p_segment_access  => 1);
 end;
 /

At this stage we need to flush the in memory heat map data to the persistent tables and views on disk.

A quick and dirty way to do this is to bounce the database rather than wait for the MMON background process to kick in and do the job.
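
An alternative worth trying, rather than restarting, is to flush the in-memory statistics explicitly (hedged: behaviour can vary by patch level):

-- flush segment access statistics from memory to the persistent heat map tables
exec dbms_ilm.flush_all_segments;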

After restarting the database we can see that the heat map statistics now show that the table was last accessed more than a month ago, in August.

select object_name, segment_write_time
 from dba_heat_map_segment
 where object_name='MYOBJECTS';

 OBJECT_NAME
 --------------------------------------------------------------------------------
 SEGMENT_W
 ---------
 MYOBJECTS
 10-AUG-13

Normally the ADO related jobs are run in the maintenance window. Rather than wait for this window to occur we trigger it manually via the DBMS_ILM.EXECUTE_ILM procedure called from this PL/SQL block.

SQL> conn scott/tiger
 Connected.
declare
v_executionid number;
begin
dbms_ilm.execute_ILM (ILM_SCOPE      => dbms_ilm.SCOPE_SCHEMA,
                      execution_mode => dbms_ilm.ilm_execution_offline,
                      task_id        => v_executionid);
end;
/

We can now query the USER_ILMTASKS view and see that the ADO job has been started and the ILM policy we have defined has been earmarked for execution.

select task_id, start_time as start_time from user_ilmtasks;
TASK_ID
----------
START_TIME
---------------------------------------------------------------------------
7
09-SEP-13 02.50.24.895834 PM
select task_id, job_name, job_state, completion_time completion from user_ilmresults;
TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE
-----------------------------------
COMPLETION
---------------------------------------------------------------------------
8
ILMJOB1116
JOB CREATED
select task_id, policy_name, object_name, selected_for_execution, job_name
from user_ilmevaluationdetails
where task_id=11;
TASK_ID
----------
POLICY_NAME
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
SELECTED_FOR_EXECUTION
------------------------------------------
JOB_NAME
--------------------------------------------------------------------------------
8
P3
MYOBJECTS
TASK_ID
----------
POLICY_NAME
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
SELECTED_FOR_EXECUTION
------------------------------------------
JOB_NAME
--------------------------------------------------------------------------------
SELECTED FOR EXECUTION
ILMJOB1116

We can now see that the table has been compressed and the ADO job has been completed.

select task_id, job_name, job_state, completion_time completion from user_ilmresults;
TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE
-----------------------------------
COMPLETION
---------------------------------------------------------------------------
11
ILMJOB1118
COMPLETED SUCCESSFULLY
09-SEP-13 03.26.44.408970 PM

 select compression, compress_for FROM user_tables where table_name = 'MYOBJECTS';
COMPRESS COMPRESS_FOR
-------- ------------------------------
ENABLED  ADVANCED

After the compression the table has been reduced in size from 320 MB to 60 MB!

select sum(bytes)/1048576 from user_segments where segment_name='MYOBJECTS';
SUM(BYTES)/1048576
------------------
60

We can remove the ILM policy on the table if required.

alter table scott.myobjects ilm delete_all;

Oracle 12c New Feature Heat Map and Automatic Data Optimization (ADO) – Part 2


In a previous post, Oracle 12c New Feature Heat Map and ADO, we looked at the 12c Heat Map and ADO (Automatic Data Optimization) features and how a compression tiering policy can be used to compress dormant or less active data.

In addition to compression, we can also use a storage tiering policy: in 12c we can create ILM policies that move data at the segment level (table or partition) to different storage tiers – for example, moving old data to a different tablespace hosted on low-cost storage which the business considers adequate for older and less active data.

In this example we have two tablespaces: one located on high-performance disk, where we want to store all our current and volatile data, and another located on low-cost storage, to hold our older and archived data.

So in this case we create two tablespaces of 25 MB each and we call them HIGH_PERF_TBS and LOW_COST_TBS.
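
The tablespace creation itself could be along the following lines (the datafile paths are assumptions depending on how storage is presented to the host):

create tablespace high_perf_tbs
  datafile '/u01/oradata/high_perf_tbs01.dbf' size 25m;

create tablespace low_cost_tbs
  datafile '/u02/oradata/low_cost_tbs01.dbf' size 25m;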

We then create a range partitioned table in the SCOTT schema called MYOBJECTS.

create table myobjects
(owner varchar2(30),
object_name varchar2(30),
object_type varchar2(25),
created date)
Partition by range (created)
(partition p_old values less than (to_date ('01-JUL-2013','DD-MON-YYYY'))
 tablespace high_perf_tbs,
partition p_new values less than (maxvalue)) 
tablespace high_perf_tbs;

We then populate this table with some test data.

insert into myobjects 
(select owner,object_name,object_type,created from all_objects);

73765 rows created.

SQL> commit;

Commit complete.

We then check the free space in the tablespaces and see that HIGH_PERF_TBS is the one mainly used, as the partitioned table was initially created in that tablespace. The tablespace is about 68% full.

SELECT /* + RULE */ df.tablespace_name "Tablespace",
df.bytes / (1024 * 1024) "Size (MB)",
SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs,
(SELECT tablespace_name,SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name) df
WHERE fs.tablespace_name (+) = df.tablespace_name
and fs.tablespace_name in ('HIGH_PERF_TBS','LOW_COST_TBS')
GROUP BY df.tablespace_name,df.bytes
Order by 4;

Tablespace                      Size (MB)  Free (MB)     % Free     % Used
------------------------------ ---------- ---------- ---------- ----------
LOW_COST_TBS                           25         24         96          4
HIGH_PERF_TBS                          25          8         32         68

By default the ILM threshold to move objects is set to 85% full for a tablespace and it will continue moving objects until the tablespace is back to less than 75% full.

We can check these default thresholds via the query shown below.

col name format A20
col value format 9999

select * from dba_ilmparameters;


NAME                 VALUE
-------------------- -----
ENABLED                  1
JOB LIMIT               10
EXECUTION MODE           3
EXECUTION INTERVAL      15
TBS PERCENT USED        85
TBS PERCENT FREE        25
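
Should these defaults need adjusting, the DBMS_ILM_ADMIN.CUSTOMIZE_ILM procedure can be used to change them (a sketch only; the values shown are arbitrary examples, not recommendations):

-- change the tablespace fullness thresholds that trigger data movement
exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used, 80);
exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_free, 30);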

Now create a storage tiering policy for the MYOBJECTS partitioned table.

Using the default threshold values, we create an ILM policy so that if the tablespace gets more than 85% full, the partition with the older data (P_OLD) is moved to the tablespace hosted on low-cost storage (LOW_COST_TBS), while the partition with current data stays in the existing tablespace.

ALTER TABLE myobjects MODIFY PARTITION p_old ILM ADD POLICY TIER TO low_cost_tbs;

Verify that the storage ILM policy has been added.

select  cast(policy_name as varchar2(30)) policy_name,
  action_type, scope, compression_level, cast(tier_tablespace as 
  varchar2(30)) tier_tbs, condition_type, condition_days
from  user_ilmdatamovementpolicies
order by policy_name;

POLICY_NAME                    ACTION_TYPE SCOPE
------------------------------ ----------- -------
COMPRESSION_LEVEL              TIER_TBS
------------------------------ ------------------------------
CONDITION_TYPE         CONDITION_DAYS
---------------------- --------------
P25                            STORAGE     SEGMENT
                               LOW_COST_TBS
                                    0


select * from user_ilmobjects where object_type='TABLE PARTITION';


POLICY_NAME
--------------------------------------------------------------------------------
OBJECT_OWNER
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
SUBOBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE        INHERITED_FROM       ENA
------------------ -------------------- ---
P25
SCOTT
MYOBJECTS

POLICY_NAME
--------------------------------------------------------------------------------
OBJECT_OWNER
--------------------------------------------------------------------------------
OBJECT_NAME
--------------------------------------------------------------------------------
SUBOBJECT_NAME
--------------------------------------------------------------------------------
OBJECT_TYPE        INHERITED_FROM       ENA
------------------ -------------------- ---
P_OLD
TABLE PARTITION    POLICY NOT INHERITED YES

We now insert some more data into the table; this fills the tablespace beyond the default 85% threshold and triggers the ILM storage policy, which will move the P_OLD partition to the tablespace LOW_COST_TBS.

insert into myobjects 
(select owner,object_name,object_type,created from all_objects);

For the purposes of this tutorial, we cannot wait for the maintenance window to open and trigger the Automatic Data Optimization policy jobs.

Instead, we trigger it manually with the following PL/SQL block, run as the table owner.

declare
v_executionid number;
begin
dbms_ilm.execute_ILM (ILM_SCOPE => dbms_ilm.SCOPE_SCHEMA,
            execution_mode => dbms_ilm.ilm_execution_offline,
            task_id   => v_executionid);
end;
/

select task_id, job_name, job_state,
to_char(completion_time,'dd-MON-yyyy') completion
from user_ilmresults;

TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE                           COMPLETION
----------------------------------- -----------
97
ILMJOB1204
COMPLETED SUCCESSFULLY              10-SEP-2013

select SELECTED_FOR_EXECUTION from user_ilmevaluationdetails
where task_id=97;

SELECTED_FOR_EXECUTION
------------------------------------------
SELECTED FOR EXECUTION

Now check that the partition P_OLD has been relocated.

select partition_name,tablespace_name
from user_tab_partitions where
table_name='MYOBJECTS';


PARTITION_ TABLESPACE_NAME
---------- ------------------------------
P_NEW      HIGH_PERF_TBS
P_OLD      LOW_COST_TBS

If we now check the free space in the two tablespaces, we can see that the used space in the original tablespace HIGH_PERF_TBS has come down and the used space in the tablespace LOW_COST_TBS has increased after the partition got relocated.

The task status also shows that the job has been completed successfully.

SELECT /* + RULE */ df.tablespace_name "Tablespace",
df.bytes / (1024 * 1024) "Size (MB)",
SUM(fs.bytes) / (1024 * 1024) "Free (MB)",
Nvl(Round(SUM(fs.bytes) * 100 / df.bytes),1) "% Free",
Round((df.bytes - SUM(fs.bytes)) * 100 / df.bytes) "% Used"
FROM dba_free_space fs,
(SELECT tablespace_name,SUM(bytes) bytes
FROM dba_data_files
GROUP BY tablespace_name) df
WHERE fs.tablespace_name (+) = df.tablespace_name
and fs.tablespace_name in ('HIGH_PERF_TBS','LOW_COST_TBS')
GROUP BY df.tablespace_name,df.bytes
Order by 4;

Tablespace                      Size (MB)  Free (MB)     % Free     % Used
------------------------------ ---------- ---------- ---------- ----------
LOW_COST_TBS                           25          8         32         68
HIGH_PERF_TBS                          25         16         64         36



 select task_id, job_name, job_state, 
 to_char(completion_time,'dd-MON-yyyy') completion
 from user_ilmresults where task_id=97;


   TASK_ID
----------
JOB_NAME
--------------------------------------------------------------------------------
JOB_STATE                           COMPLETION
----------------------------------- -----------
        97
ILMJOB1204
COMPLETED SUCCESSFULLY              10-SEP-2013

Creating an Oracle 12c Data Guard Active Standby Database


This note examines how to create an Oracle 12.1.0 physical standby Active Data Guard database using the RMAN DUPLICATE FROM ACTIVE command.

We will be creating the data guard configuration in a 12c Container Database.

Remember – in 12c, Data Guard is set up at the container level and not at the individual pluggable database level, because the redo log files belong to the container database; the individual pluggable databases do not have their own online redo log files.

In my next post we will examine how to unplug a pluggable database from a Container database not having Data Guard set up and how easy it is to provide high availability for a pluggable database by just plugging it into a container database which has Data Guard configured.

The platform is 64-bit Oracle Enterprise Linux 5.9. The db_unique_name of the primary database is CONDB1 and the db_unique_name of the Active Standby database is CONDB1_DR.

Let us look at the steps involved.

 

On Primary

SQL> alter database force logging;

Database altered.

On Standby

Create the required directory structure

$ mkdir -p  /u01/app/oracle/admin/condb1/adump

$ mkdir -p /u01/app/oracle/oradata/condb1/pdb1/

$ mkdir -p /u01/app/oracle/oradata/condb1/pdbseed

$ mkdir -p /u01/app/oracle/fast_recovery_area/condb1/

$ mkdir -p /u01/app/oracle/oradata/condb1/pdbseed/

 

Copy the password file from primary to standby

$ scp -rp orapwcondb1* oracle@orasql-001-test:/u01/app/oracle/product/12.1.0/dbhome_1/dbs
oracle@orasql-001-test's password:

orapwcondb1                                                                                                                            
100% 7680     7.5KB/s   00:00

 

On Standby

Add a static entry in the listener.ora for condb1_dr

LISTENER12C =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = orasql-001-test.corporate.domain)(PORT = 1523))
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC1523))
    )
  )

SID_LIST_LISTENER12C =
  (SID_LIST =
 (SID_DESC =
      (GLOBAL_DBNAME = condb1_dr)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
      (SID_NAME = condb1)
    )
 )

Reload the listener

$ lsnrctl reload listener12c

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 06-NOV-2013 10:49:46

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=orasql-001-test.corporate.domain)(PORT=1523)))
The command completed successfully

Add an entry in the initcondb1.ora – just one line with the entry for db_name

$ cat initcondb1.ora
*.db_name=condb1

Add an entry in the oratab file

condb1:/u01/app/oracle/product/12.1.0/dbhome_1:N

Add the tns aliases on both the primary as well as standby site

 

On Primary

condb1_dr =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orasql-001-test.corporate.domain)(PORT = 1523))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = condb1_dr)
    )
  )

On Standby

Since we are using a non-standard port for the listener we need to add an entry in the tnsnames.ora file for the LOCAL_LISTENER database parameter.

LISTENER_CONDB1 =
  (ADDRESS = (PROTOCOL = TCP)(HOST = orasql-001-test.corporate.domain)(PORT = 1523))

CONDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orasql-001-dev.corporate.domain)(PORT = 1525))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = condb1)
    )
  )

CONDB1_DR =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = orasql-001-test.corporate.domain)(PORT = 1523))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = condb1_dr)
    )
  )

On Standby

Start the Standby instance in NOMOUNT mode

$ . oraenv
ORACLE_SID = [condb1] ? condb1
The Oracle base has been set to /u01/app/oracle

[oracle@orasql-001-test admin]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Wed Nov 6 10:57:42 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:
Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area  229683200 bytes
Fixed Size                  2286800 bytes
Variable Size             171969328 bytes
Database Buffers           50331648 bytes
Redo Buffers                5095424 bytes

 

On Primary

Connect to Primary and auxiliary connection to Standby

$ rman target sys/syspassword auxiliary sys/syspassword@condb1_dr

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Nov 6 10:58:43 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)
connected to auxiliary database: CONDB1 (not mounted)

This is the command we will run to create the Standby Database.

Note – since the data file names are not being changed on the standby database, we need to include the NOFILENAMECHECK clause.

run
{
allocate channel c1 type disk;
allocate channel c2 type disk;
allocate auxiliary channel aux type disk;
duplicate target database for standby from active database nofilenamecheck spfile 
set log_archive_max_processes='8'
set db_unique_name='condb1_dr'
set standby_file_management='AUTO'
set log_archive_config='dg_config=(condb1,condb1_dr)'
set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(all_logfiles,all_roles) db_unique_name=condb1_dr'
set log_Archive_dest_2='service=condb1 async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=condb1';
}

After the RMAN DUPLICATE command completes we now need to add the relevant parameters for the redo log transport on the Primary database.

RMAN> alter system  set standby_file_management='AUTO';

Statement processed

RMAN> alter system set log_archive_config='dg_config=(condb1,condb1_dr)';

Statement processed

RMAN> alter system set log_archive_dest_1='location=USE_DB_RECOVERY_FILE_DEST valid_for=(all_logfiles,all_roles) db_unique_name=condb1';

Statement processed

RMAN> alter system set log_Archive_dest_2='service=condb1_dr async noaffirm reopen=15 valid_for=(all_logfiles,primary_role) db_unique_name=condb1_dr';

Statement processed

We will be running the standby database in Maximum Availability mode, so we need to create the standby redo log files on both the primary as well as standby site.

Since we have 3 online redo log file groups, we need to create (3+1) = 4 standby redo log file groups. The current groups can be checked first as shown below.
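
For example, a simple check against V$LOG:

-- check the existing online redo log groups and sizes so the standby logs can be sized to match
select group#, thread#, bytes/1024/1024 as size_mb from v$log;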

On Standby

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo01.log' size 50m;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo02.log'  size 50m;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo03.log'  size 50m;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo04.log' size 50m;

Database altered.

On Primary

RMAN> ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo01.log' size 50m;

Statement processed

RMAN> ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo02.log' size 50m;

Statement processed

RMAN> ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo03.log' size 50m;

Statement processed

RMAN>  ALTER DATABASE ADD STANDBY LOGFILE '/u01/app/oracle/oradata/condb1/standby_redo04.log' size 50m;

Statement processed

On Primary change the protection mode

RMAN> alter database set standby database to maximize availability;

Statement processed

Check the status

RMAN> select destination,status from v$archive_dest_status where rownum <3;

DESTINATION
--------------------------------------------------------------------------------
STATUS
---------

VALID

condb1_dr
VALID

Test Redo Apply is working

Connect to the pluggable database PDB1 as SH and create a table called SALES_DR.

Populate it with rows from SALES table in the SH schema.

 

$ sqlplus sh/sh@localhost:1525/pdb1

SQL*Plus: Release 12.1.0.1.0 Production on Wed Nov 6 11:40:26 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sat May 25 2013 04:25:15 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create table sales_dr as select * from sales;

Table created.

On the Standby database, the RMAN script which we ran from the primary database has not opened the database and started managed recovery.

Let us now manually do it.

On Standby

SQL> shutdown immediate;
ORA-01109: database not open

Database dismounted.

SQL> startup;
ORACLE instance started.

Total System Global Area 4275781632 bytes
Fixed Size                  2296576 bytes
Variable Size            2214593792 bytes
Database Buffers         2046820352 bytes
Redo Buffers               12070912 bytes
Database mounted.
Database opened.

SQL> recover managed standby database using current logfile disconnect;
Media recovery complete.

Check the MRP process is running

SQL> !ps -ef |grep mrp
oracle   28800     1  0 11:41 ?        00:00:00 ora_mrp0_condb1

SQL> select process,status,thread#,sequence#,blocks from v$managed_standby where process like '%MRP%';

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCKS
--------- ------------ ---------- ---------- ----------
MRP0      WAIT_FOR_LOG          1         25          0

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           MOUNTED

SQL>  alter pluggable database all open read only;

Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ ONLY

The pluggable database PDB1 has been opened in READ ONLY mode, but the container database is running as an Active Standby database, applying changes in real time as soon as they are received from the primary, even though the standby container database and all its associated pluggable databases have been opened read only.
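
A quick way to confirm this state on the standby (the expected output is shown as a comment, not as captured output):

-- confirm the role and open mode of the standby container database
select database_role, open_mode from v$database;
-- expected: PHYSICAL STANDBY    READ ONLY WITH APPLY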

Let us see if the SALES_DR table we had created on the Primary database can be accessed from the active standby database.

On the standby site, connect to the pluggable database PDB1 as SH

[oracle@orasql-001-test condb1]$ sqlplus sh/sh@localhost:1523/pdb1

SQL*Plus: Release 12.1.0.1.0 Production on Wed Nov 6 11:43:40 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Wed Nov 06 2013 11:40:26 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select count(*) from sales_dr;

  COUNT(*)
----------
    918843

The test is successful and we have created our first Oracle 12c Active Standby database!!

Plugging an Oracle 12c pluggable database into a Data Guard container database


In a previous post we had seen how to setup and create an Oracle 12c Data Guard Physical Standby Database.

Remember Data Guard is set up at the CONTAINER database level and not at the PLUGGABLE database level.

In this example we will see how we can simply unplug a database from a non Data Guard container database and plug it into the container database where Data Guard has been configured and automatically the pluggable database will become part of a highly available environment.

Before 12c, if we had 10 databases that needed Data Guard, we would have to go through the Data Guard setup procedures ten times, maintain 10 separate Data Guard Broker configurations, and so on.

In Oracle 12c we set up and configure Data Guard just once, at the container database level; whenever we need a database to become part of this highly available environment, we simply plug it into that container database and we are good to go.

So in our previous example we had a container database where we had set up Data Guard called CONDB1 and a pluggable database PDB1.

Now we have another pluggable database, PDB2, which is part of a non Data Guard container database CONDB2, and we need to move it into the existing Data Guard container database CONDB1.

The first thing we need to do is unplug the pluggable database PDB2 from its current container and plug it into CONDB1.

SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB2                           READ WRITE


SQL> alter pluggable database pdb2 close immediate;

Pluggable database altered.


SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB2                           MOUNTED


SQL> alter pluggable database pdb2 unplug into '/home/oracle/pdb2.xml';

Pluggable database altered.

We now need to copy the pluggable database PDB2 files to the standby site.

[oracle@orasql-001-dev ~]$ cd /u01/app/oracle/oradata/condb2/pdb2/

[oracle@orasql-001-dev pdb2]$ ls -lrt
total 1319140
-rw-r----- 1 oracle oinstall  91234304 Nov  8 08:48 pdb2_temp01.dbf
-rw-r----- 1 oracle oinstall 283123712 Nov  8 09:30 system01.dbf
-rw-r----- 1 oracle oinstall 723525632 Nov  8 09:30 sysaux01.dbf
-rw-r----- 1 oracle oinstall   5251072 Nov  8 09:30 SAMPLE_SCHEMA_users01.dbf
-rw-r----- 1 oracle oinstall 338829312 Nov  8 09:30 example01.dbf


[oracle@orasql-001-dev pdb2]$ cd ..

[oracle@orasql-001-dev condb2]$ scp -rp pdb2 oracle@orasql-001-test:/u01/app/oracle/oradata/condb1
oracle@orasql-001-test's password:
example01.dbf                                                                                                                         100%  323MB  35.9MB/s   00:09
pdb2_temp01.dbf                                                                                                                       100%   87MB  29.0MB/s   00:03
SAMPLE_SCHEMA_users01.dbf                                                                                                             100% 5128KB   5.0MB/s   00:00
system01.dbf                                                                                                                          100%  270MB  38.6MB/s   00:07
sysaux01.dbf

Now plug the unplugged database PDB2 into the container database CONDB1.

Note that since the directory structures in the two container databases are different, we have to use the FILE_NAME_CONVERT parameter when we plug in the database.

[oracle@orasql-001-dev condb1]$ echo $ORACLE_SID
condb1

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT



SQL> create pluggable database pdb2
  2  using '/home/oracle/pdb2.xml'
  3  copy
  4  file_name_convert=('/u01/app/oracle/oradata/condb2/pdb2/','/u01/app/oracle/oradata/condb1/pdb2/');

Pluggable database created.



SQL> select name, open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ WRITE
PDB2                           MOUNTED


SQL> select  PDB_NAME, DBID , CON_ID, STATUS  from CDB_PDBS;
PDB_NAME              DBID     CON_ID STATUS
--------------- ---------- ---------- -------------
PDB1            3338455196          1 NORMAL
PDB$SEED        4073382782          1 NORMAL
PDB2            3897194249          1 NEW

After plugging in the database, open it in read write mode

SQL> conn / as sysdba
Connected.

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter pluggable database pdb2 open read write;

Pluggable database altered.

SQL> select  PDB_NAME, DBID , CON_ID, STATUS  from CDB_PDBS;

PDB_NAME              DBID     CON_ID STATUS
--------------- ---------- ---------- -------------
PDB1            3338455196          1 NORMAL
PDB$SEED        4073382782          1 NORMAL
PDB2            3897194249          1 NORMAL

When we connect to the standby site, we can see that the PDB2 database is now in a mount state. We can open it in read only mode if this is part of an Active Standby configuration.

SQL> select process,status,thread#,sequence#,blocks from v$managed_standby where process like '%MRP%';

PROCESS   STATUS          THREAD#  SEQUENCE#     BLOCKS
--------- ------------ ---------- ---------- ----------
MRP0      APPLYING_LOG          1         38     102400

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ ONLY
PDB2                           MOUNTED



SQL> alter pluggable database all open read only;

Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB1                           READ ONLY
PDB2                           READ ONLY

Let us now test this.

We connect to PDB2 and create a test table and populate it with some rows. We will then connect to the standby site and see if the changes have been propagated across.

Note – we have not done any Data Guard setup explicitly for the pluggable database PDB2. It inherited the Data Guard configuration when we plugged it into the container database CONDB1 where Data Guard had been set up.

Connect to Primary ….

[oracle@orasql-001-dev condb1]$ sqlplus sh/sh@localhost:1525/pdb2

SQL*Plus: Release 12.1.0.1.0 Production on Fri Nov 8 10:02:12 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sat May 25 2013 04:25:15 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> create table customers_dr as select * from customers;

Table created.


SQL> select count(*) from customers_dr;

  COUNT(*)
----------
     55500

Connect to standby and confirm ….

[oracle@orasql-001-test condb1]$ sqlplus sh/sh@localhost:1523/pdb2

SQL*Plus: Release 12.1.0.1.0 Production on Fri Nov 8 10:03:14 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Fri Nov 08 2013 10:02:12 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select count(*) from customers_dr;

  COUNT(*)
----------
     55500

Using GoldenGate 12c with an Oracle 12c Multitenant database


This note illustrates an example of using GoldenGate 12c with Oracle 12c Multitenant Container databases.

While in most ways Oracle GoldenGate operates in a multitenant container database the same way it operates in a regular Oracle database, we will examine some of the main differences when it comes to configuring extract and replicat processes that connect to a pluggable database.

Remember that all PDBs or Pluggable Databases belonging to one single container database share the same redo stream. So GoldenGate has to filter out the redo records for PDBs which it does not need. At the same time each PDB has its own data dictionary so GoldenGate needs to track multiple data dictionaries.

Here are some of the things to keep in mind when dealing with OGG and 12c multitenant architecture.

  • Capture from a multitenant container database is available only in integrated capture mode, not classic capture.
  • One extract can be configured to capture changes from multiple PDBs.
  • Since we have to use integrated capture mode, a log mining server is involved, and this is only accessible from the root container (CDB$ROOT).
  • We have to connect as a common user to attach to the log mining server; in this example we use a user called C##GGADMIN.
  • There is a three-part naming convention in GGSCI and in parameter files, for example Container_Name.Schema.Table (or Sequence).
  • The SOURCECATALOG parameter, when used, lets us keep the earlier Schema.Table naming convention.
  • A replicat can only connect and apply to one pluggable database.
  • The dbms_goldengate_auth.grant_admin_privilege procedure grants the appropriate privileges for capture and apply within a multitenant container database. This includes the container parameter, which must be set to ALL, as shown in the following example:

dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN', container=>'all')

Let us now look at an example.

We have our source 12c container database (CONDB2) and we want to replicate the SH schema located in the pluggable database SALES to a target 12c pluggable database SALES_DR.

Note that in our source database we have set up supplemental logging by running the following SQL statements:

SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL>SELECT SUPPLEMENTAL_LOG_DATA_MIN, FORCE_LOGGING FROM v$database;

We have also created a common user called C##GGADMIN and granted it the required privileges; a sketch of the commands involved is shown below.
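
A minimal sketch of that setup (the password, the blanket DBA grant and the parameter change are assumptions made for this test environment, not recommendations):

-- run as SYS in the root container CDB$ROOT
create user c##ggadmin identified by welcome1 container=all;
grant dba to c##ggadmin container=all;
exec dbms_goldengate_auth.grant_admin_privilege('C##GGADMIN', container=>'all');

alter system set enable_goldengate_replication=true scope=both;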

Add supplemental logging for the SH schema

Note: here we are connecting to the pluggable database SALES

GGSCI (orasql-001-dev.mydomain) 1> dblogin userid C##GGADMIN@sales password welcome1
Successfully logged into database SALES.

GGSCI (orasql-001-dev.mydomain) 2> ADD SCHEMATRANDATA SH ALLCOLS

2014-01-09 16:02:29  INFO    OGG-01788  SCHEMATRANDATA has been added on schema SH.

2014-01-09 16:02:30  INFO    OGG-01976  SCHEMATRANDATA for scheduling columns has been added on schema SH.

2014-01-09 16:02:30  INFO    OGG-01977  SCHEMATRANDATA for all columns has been added on schema SH.

Register the Integrated Extract

Note: here we are connecting to the root container database.

GGSCI (orasql-001-dev.mydomain) 1>  dblogin userid C##GGADMIN password welcome1

Successfully logged into database CDB$ROOT.

GGSCI (orasql-001-dev.mydomain) 6> REGISTER EXTRACT ext1 DATABASE  CONTAINER (sales)

Extract EXT1 successfully registered with database at SCN 2147029.

Add the Extract and Data Pump process groups

GGSCI (orasql-001-dev.mydomain) 7> ADD EXTRACT ext1 INTEGRATED TRANLOG, BEGIN NOW
EXTRACT added.

GGSCI (orasql-001-dev.mydomain) 8> ADD EXTTRAIL ./dirdat/lt EXTRACT ext1
EXTTRAIL added.

GGSCI (orasql-001-dev.mydomain) 9> ADD EXTRACT extdp1 EXTTRAILSOURCE ./dirdat/lt BEGIN NOW
EXTRACT added.

GGSCI (orasql-001-dev.mydomain) 10> ADD RMTTRAIL ./dirdat/rt EXTRACT extdp1
RMTTRAIL added.

Note the use of the parameter SOURCECATALOG in the extract parameter file.

SOURCECATALOG specifies a default container in an Oracle multitenant container database for subsequent TABLE or SEQUENCE statements. It enables the use of the legacy two-part naming convention (schema.object) where three-part names would otherwise be required for those databases.

So instead of having to specify the PDB name SALES, then the schema name SH, followed by the table or sequence name, we can set the PDB name once via the SOURCECATALOG keyword and it will apply until another PDB name is specified by a subsequent SOURCECATALOG parameter.

SOURCECATALOG sales
TABLE sh.*;

or if we are extracting from two different PDBs (sales,hr) located in the same container database we can specify it like this

SOURCECATALOG sales
TABLE sh.*;
TABLE oe.*;
SOURCECATALOG hr
TABLE hr.*;

Note – we will discuss the new 12c Credential Store feature and USERIDALIAS parameter in a subsequent post

GGSCI (orasql-001-dev.mydomain) 11> edit params ext1
EXTRACT ext1
SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_owner
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
EXTTRAIL ./dirdat/lt
SOURCECATALOG sales
TABLE sh.*;

GGSCI (orasql-001-dev.mydomain) 12> edit params extdp1

EXTRACT extdp1
SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_owner
RMTHOST orasql-001-test, MGRPORT 7809 
RMTTRAIL ./dirdat/rt
SOURCECATALOG sales
TABLE sh.*;

Add the Replicat process group connected to the target PDB SALES_DR

Note – we are using something new in 12c OGG called Integrated Replicat – we will discuss more about Integrated Replicats in another post.

GGSCI (orasql-001-test.mydomain) 2> DBLOGIN USERID C##ggadmin@sales_dr, PASSWORD welcome1
Successfully logged into database SALES_DR.

GGSCI (orasql-001-test.mydomain) 4> ADD REPLICAT rep1 INTEGRATED EXTTRAIL ./dirdat/rt
REPLICAT (Integrated) added.

GGSCI (orasql-001-test.mydomain) 5> edit params rep1
REPLICAT rep1
SETENV (ORACLE_SID='condb2')
DBOPTIONS INTEGRATEDPARAMS(parallelism 6)
USERID C##GGADMIN@sales_dr, PASSWORD welcome1
ASSUMETARGETDEFS
MAP sales.sh.*, TARGET sales_dr.sh.*;

Start the Integrated Extract and Data Pump

GGSCI (orasql-001-dev.mydomain) 11> start extract ext1

Sending START request to MANAGER ...
EXTRACT EXT1 starting

GGSCI (orasql-001-dev.mydomain) 12> info extract ext1

EXTRACT    EXT1      Initialized   2014-01-09 16:14   Status STARTING
Checkpoint Lag       00:00:00 (updated 02:03:22 ago)
Process ID           1139
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2014-01-09 16:14:32
                     SCN 0.0 (0)

GGSCI (orasql-001-dev.mydomain) 14> start extract extdp1

Sending START request to MANAGER ...
EXTRACT EXTDP1 starting

GGSCI (orasql-001-dev.mydomain) 15> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:06      00:00:01
EXTRACT     RUNNING     EXTDP1      00:00:00      01:59:48

GGSCI (orasql-001-dev.mydomain) 18> info extract ext1

EXTRACT    EXT1      Last Started 2014-01-09 18:18   Status RUNNING
Checkpoint Lag       00:00:06 (updated 00:00:09 ago)
Process ID           1139
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2014-01-09 18:21:29
                     SCN 0.2195794 (2195794)

GGSCI (orasql-001-dev.mydomain) 19>  info extract extdp1

EXTRACT    EXTDP1    Last Started 2014-01-09 18:18   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Process ID           1216
Log Read Checkpoint  File ./dirdat/lt000000
                     2014-01-09 16:18:48.000000  RBA 1478

Connect to source PDB and make some changes

[oracle@orasql-001-dev goldengate]$ sqlplus sh/sh@localhost:1525/sales

SQL*Plus: Release 12.1.0.1.0 Production on Thu Jan 9 18:22:38 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Thu Jan 09 2014 15:35:34 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> update customers set cust_city='Perth';

55500 rows updated.

SQL> commit;

Connect to target PDB and confirm changes have been applied

[oracle@orasql-001-test]$ sqlplus sh/sh@localhost:1525/sales_dr

SQL*Plus: Release 12.1.0.1.0 Production on Wed Jan 15 10:55:56 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Fri Jan 10 2014 13:35:21 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>  select distinct cust_city from customers;

CUST_CITY
------------------------------
Perth

GoldenGate 12c New Feature – Credential Store and USERIDALIAS


ADD CREDENTIALSTORE is a new command in Oracle GoldenGate 12c.

The credential store eliminates the need to specify user names and clear-text passwords in the Oracle GoldenGate parameter files. It is implemented as an autologin wallet within the Oracle Credential Store Framework (CSF).

We can use the USERIDALIAS parameter in an extract or replicat parameter file to map a user-specified alias to a userid-password pair stored in the credential store.

Let us see an example.

GGSCI (kens-orasql-001-dev.corporateict.domain) 3> ADD CREDENTIALSTORE

Credential store created in ./dircrd/.

[oracle@kens-orasql-001-dev goldengate]$ cd dircrd
[oracle@kens-orasql-001-dev dircrd]$ ls
cwallet.sso

We see that the credential store has been created in the dircrd sub-directory located in the GoldenGate software installation home. If we need to create it in any other location like a shared file system, we have to specify that via the CREDENTIALSTORELOCATION parameter in the GLOBALS file.
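
For example, an entry along these lines in the GLOBALS file would relocate the credential store (the path shown is purely illustrative):

CREDENTIALSTORELOCATION /ogg_shared/dircrd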

We now want to add some users to the credential store.

In our earlier post we had explained how to use Oracle GoldenGate 12c with an Oracle 12c Multitenant database.

We had created a common user C##GGADMIN and now want to add that user to the credential store

GGSCI (kens-orasql-001-dev.corporateict.domain) 1> ALTER CREDENTIALSTORE ADD USER c##ggadmin ALIAS gg_root
Password:

Credential store in ./dircrd/ altered.

We can now use the USERIDALIAS parameter in extract and replicat parameter files as well as with the DBLOGIN command as shown here.

GGSCI (kens-orasql-001-dev.corporateict.domain) 3> DBLOGIN USERIDALIAS gg_root
Successfully logged into database CDB$ROOT.

Suppose we want to connect to one of the PDBs in that container database called SALES and want to create an alias for this connection and store it in the credential store as well.

GGSCI (kens-orasql-001-dev.corporateict.domain) 4>  ALTER CREDENTIALSTORE ADD USER c##ggadmin@sales ALIAS gg_sales
Password:

Credential store in ./dircrd/ altered.

GGSCI (kens-orasql-001-dev.corporateict.domain) 5> DBLOGIN USERIDALIAS gg_sales
Successfully logged into database SALES.

GGSCI (kens-orasql-001-dev.corporateict.domain) 6> INFO CREDENTIALSTORE

Reading from ./dircrd/:

Domain: OracleGoldenGate

Alias: gg_root
Userid: c##ggadmin

Alias: gg_sales
Userid: c##ggadmin@sales

GoldenGate 12c New Feature – Integrated Replicat


One of the new features in GoldenGate 12c is the Integrated Replicat, or Integrated Apply, feature.

Note that Integrated Extract was introduced in GoldenGate 11g to go along with what is now termed Classic Extract.

Keep in mind that to use Integrated Replicat the target database needs to be version 11.2.0.4 or later and this feature cannot be used if the target database is non-Oracle.

Integrated Replicat is useful for heavy workloads: additional parallel apply processes are created and coordinated automatically, enabling transactions to be applied in parallel without any changes to the replicat parameter file.

The Replicat process reads the trail file and then constructs logical change records (LCRs) which are then transmitted to the database using what is known as a Lightweight Streaming API.

A number of database apply processes are created each with their own function:

  • Receiver: Reads LCRs
  • Preparer: Computes the dependencies between the transactions (primary key, unique indexes, foreign key), grouping transactions and sorting in dependency order.
  • Coordinator: Coordinates transactions, maintains the order between applier processes.
  • Applier:  Performs changes for assigned transactions, including conflict detection and error handling

The integrated replicat has auto-tuning based on workload: depending on the number of LCRs being processed, additional parallel apply processes are added or removed.

This is controlled via two parameters in the replicat parameter file – PARALLELISM and MAX_PARALLELISM.

PARALLELISM is the minimum number of parallel apply processes. The default is 4.

MAX_PARALLELISM is the maximum number of apply servers. The default is 30.

GoldenGate will automatically increase and decrease the number of apply servers based on workload, within the limit set by the MAX_PARALLELISM parameter.
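
Both values can be supplied to the apply process through the replicat parameter file, for example (the numbers are illustrative only):

-- replicat parameter file entry controlling the apply server pool
DBOPTIONS INTEGRATEDPARAMS(parallelism 2, max_parallelism 10)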

Integrated Replicat does not require a checkpoint table to be set up as in the case of the conventional or classic replicat which we used in earlier GoldenGate versions.

The START REPLICAT command automatically registers the integrated replicat with the target Oracle database.

Integrated Replicat applies transactions asynchronously.

Transactions that do not have inter-dependencies can be safely executed and committed out of order to achieve fast throughput. Transactions with dependencies are guaranteed to be applied in the same order as on the source.

The integrated replicat requires the source extract parameter file to contain these new parameters introduced in 12c – LOGALLSUPCOLS and UPDATERECORDFORMAT COMPACT.

From the documentation:

LOGALLSUPCOLS causes Extract to do the following with these supplementally logged columns:

  • Automatically includes in the trail record the before image for UPDATE operations.
  • Automatically includes in the trail record the before image of all supplementally logged columns for both UPDATE and DELETE operations

UPDATERECORDFORMAT

By default, when Extract is configured to generate before images, the before image is stored in a separate record from the after image in the trail.

When two records are generated for an update to a single row, it incurs additional disk I/O and processing for both Extract and Replicat. If supplemental logging is enabled on all columns, the unmodified columns may be repeated in both the before and after records, and the overall size of the trail is larger as well. This overhead is reduced by using UPDATERECORDFORMAT.

When UPDATERECORDFORMAT is used, Extract writes the before and after images to a single record that contains all of the information needed to process an UPDATE operation.

 

Let us look at an example of configuring and using an Integrated Replicat.

We have a source table MYTAB residing in an Oracle 12c pluggable database called SALES and we will be replicating this table to another 12c target pluggable database called SALES_DR.

Using the new CREDENTIALSTORE feature we have created two user accounts – one called gg_root which connects to the root container in a 12c Container database and another called gg_sales which connects to the PDB called SALES.

 

Add supplemental logging at the table level

GGSCI (orasql-001-dev.mydomain) 1> dblogin useridalias gg_sales

Successfully logged into database SALES.

GGSCI (orasql-001-dev.mydomain) 4> ADD TRANDATA SALES.SH.MYTAB ALLCOLS

Logging of supplemental redo data enabled for table SALES.SH.MYTAB.

TRANDATA for scheduling columns has been added on table 'SALES.SH.MYTAB'.

TRANDATA for all columns has been added on table 'SALES.SH.MYTAB'.

 

Register the integrated extract

GGSCI (orasql-001-dev.mydomain) 6> DBLOGIN USERIDALIAS gg_root

Successfully logged into database CDB$ROOT.

 

GGSCI (orasql-001-dev.mydomain) 7> REGISTER EXTRACT myext1 DATABASE  CONTAINER (sales)

Extract MYEXT1 successfully registered with database at SCN 3669081.

 

Add the Integrated Extract and Data Pump

GGSCI (orasql-001-dev.mydomain) 8> ADD EXTRACT myext1 INTEGRATED TRANLOG, BEGIN NOW

EXTRACT added.

 

GGSCI (orasql-001-dev.mydomain) 9> ADD EXTTRAIL ./dirdat/xx EXTRACT myext1

EXTTRAIL added.

 

GGSCI (orasql-001-dev.mydomain) 10> ADD EXTRACT mydp1 EXTTRAILSOURCE ./dirdat/xx BEGIN NOW

EXTRACT added.

GGSCI (orasql-001-dev.mydomain) 11> ADD RMTTRAIL ./dirdat/rx EXTRACT mydp1

RMTTRAIL added.

 

Edit the Integrated Extract Parameter File

 

GGSCI (orasql-001-dev.mydomain) 11> edit params myext1

 

EXTRACT myext1

SETENV (ORACLE_SID='condb2')

USERIDALIAS gg_root

LOGALLSUPCOLS

UPDATERECORDFORMAT COMPACT

EXTTRAIL ./dirdat/xx

SOURCECATALOG sales

TABLE sh.mytab;

 

Edit the Data Pump Parameter File

GGSCI (orasql-001-dev.mydomain) 12> edit params mydp1

 

EXTRACT mydp1

SETENV (ORACLE_SID='condb2')

USERIDALIAS gg_owner

RMTHOST orasql-001-test, MGRPORT 7809

RMTTRAIL ./dirdat/rx

SOURCECATALOG sales

TABLE sh.mytab;

 

On the target, add the Integrated Replicat

GGSCI (orasql-001-test.mydomain) 1> DBLOGIN USERID C##ggadmin@sales_dr, PASSWORD welcome1

Successfully logged into database SALES_DR.

 

GGSCI (orasql-001-test.mydomain) 2> ADD REPLICAT myrep1 INTEGRATED EXTTRAIL ./dirdat/rx

REPLICAT (Integrated) added.

 

GGSCI (orasql-001-test.mydomain) 5> edit params myrep1

 

REPLICAT myrep1

SETENV (ORACLE_SID='condb2')

DBOPTIONS INTEGRATEDPARAMS(parallelism 6)

USERID C##GGADMIN@sales_dr, PASSWORD welcome1

ASSUMETARGETDEFS

MAP sales.sh.mytab, TARGET sales_dr.sh.mytab;

 

Note:

The parameter DBOPTIONS INTEGRATEDPARAMS(parallelism 6) denotes that for this integrated replicat, we are specifying that the minimum number of parallel apply processes will be 6.

 

Start the Integrated Extract, Data Pump and Integrated Replicat via the START EXTRACT and START REPLICAT commands.
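For this example that amounts to the following GGSCI commands.

On the source:

START EXTRACT myext1
START EXTRACT mydp1

On the target:

START REPLICAT myrep1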

 

Check the status of the Replicat process as well as the database server apply processes.

GGSCI (orasql-001-test.mydomain) 6> info replicat myrep1

 

REPLICAT   MYREP1    Last Started 2014-01-15 13:25   Status RUNNING

INTEGRATED

Checkpoint Lag       00:00:00 (updated 00:02:09 ago)

Process ID           20828

Log Read Checkpoint  File ./dirdat/rx000000

First Record  RBA 0

 

SQL> select REPLICAT_NAME,SERVER_NAME from DBA_GOLDENGATE_INBOUND;

REPLICAT_NAME        SERVER_NAME
-------------------- --------------------
MYREP1               OGG$MYREP1


SQL> select  APPLY_NAME,QUEUE_NAME,status from dba_apply;

APPLY_NAME           QUEUE_NAME           STATUS
-------------------- -------------------- --------
OGG$MYREP1           OGGQ$MYREP1          ENABLED


SQL> select apply_name,state from V$GG_APPLY_COORDINATOR ;

APPLY_NAME                     STATE
------------------------------ ---------------------
OGG$MYREP1                     IDLE


Note: Because we had configured PARALLELISM to be 6 via the DBOPTIONS INTEGRATEDPARAMS(parallelism 6) in the replicat parameter file, we will see 6 apply server processes which are ready to run.

At this stage they are IDLE and have not received or applied any messages or LCRs.

SQL> select server_id,TOTAL_MESSAGES_APPLIED from V$GG_APPLY_SERVER
  2  where apply_name='OGG$MYREP1';

 SERVER_ID TOTAL_MESSAGES_APPLIED
---------- ----------------------
         4                      0
         2                      0
         6                      0
         5                      0
         3                      0
         1                      0

6 rows selected.

Populate the base table and monitor the extract process

We now insert a million rows into our source table and see that the extract has processed those newly added rows.

GGSCI (kens-orasql-001-test.mydomain) 1> stats extract myext1

Sending STATS request to EXTRACT MYEXT1 ...

Start of Statistics at 2014-01-20 16:21:26.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                         1.00

Output to ./dirdat/ex:

Extracting from SALES.SH.MYTAB to SALES_DR.SH.MYTAB:

*** Total statistics since 2014-01-20 16:19:54 ***
        Total inserts                                1000000.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                             1000000.00

Monitor the Integrated Replicat

GGSCI (orasql-001-dev.mydomain) 2> info replicat myrep1

REPLICAT   MYREP1    Last Started 2014-01-20 16:12   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           4794
Log Read Checkpoint  File ./dirdat/rx000000
                     2014-01-20 16:22:51.183273  RBA 54918866


GGSCI (orasql-001-dev.mydomain) 1> stats replicat myrep1

Sending STATS request to REPLICAT MYREP1 ...

Start of Statistics at 2014-01-20 17:47:25.


Integrated Replicat Statistics:

        Total transactions                                 5.00
        Redirected                                         0.00
        DDL operations                                     0.00
        Stored procedures                                  0.00
        Datatype functionality                             0.00
        Event actions                                      0.00
        Direct transactions ratio                          0.00%

Replicating from SALES.SH.MYTAB to SALES_DR.SH.MYTAB:

*** Total statistics since 2014-01-20 16:20:05 ***
        Total inserts                                1000000.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                             1000000.00

Monitor the status of the Database Apply Server Processes

SQL> select apply_name,state from V$GG_APPLY_COORDINATOR ;

APPLY_NAME                     STATE
------------------------------ ---------------------
OGG$MYREP1                     APPLYING


SQL>  select server_id,TOTAL_MESSAGES_APPLIED from V$GG_APPLY_SERVER
  2  where apply_name='OGG$MYREP1';


 SERVER_ID TOTAL_MESSAGES_APPLIED
---------- ----------------------
         4                      0
         2                 388462
         6                      0
         5                      0
         3                      0
         1                 611543



SQL> select apply_name,state,TOTAL_MESSAGES_DEQUEUED,  TOTAL_MESSAGES_SPILLED
  2  from  V$GG_APPLY_READER;

APPLY_NAME                     STATE
------------------------------ ------------------------------------
TOTAL_MESSAGES_DEQUEUED TOTAL_MESSAGES_SPILLED
----------------------- ----------------------
OGG$MYREP1                     IDLE
                1000005                      0


SQL>  select APPLY_NAME,TOTAL_APPLIED, TOTAL_RECEIVED from V$GG_APPLY_COORDINATOR;

APPLY_NAME                     TOTAL_APPLIED TOTAL_RECEIVED
------------------------------ ------------- --------------
OGG$MYREP1                                 5              5



SQL> select apply_name,state from V$GG_APPLY_COORDINATOR ;

APPLY_NAME                     STATE
------------------------------ ---------------------
OGG$MYREP1                     IDLE

12c GoldenGate New Feature – Coordinated Replicat


In one of the earlier posts we had discussed the GoldenGate 12c Integrated Apply or Integrated Replicat feature. It enables high volume transactions to be applied in parallel.

But it is only supported for Oracle databases, and the database version needs to be 11.2.0.4 or higher.

The Coordinated Replicat feature is new in GoldenGate 12c. Here the Replicat is multi-threaded: within a single Replicat instance, multiple threads read the trail independently and apply transactions in parallel. One coordinator thread spawns and coordinates one or more threads that execute replicated SQL operations in parallel.

The main difference between the Integrated Replicat and the Coordinated Replicat is that while in the case of the Integrated Replicat, GoldenGate will add (or remove) apply server processes depending on the workload, in the case of the Coordinated Replicat it is user-defined partitioning of the workload that allows high volume transactions to be applied concurrently and in parallel. This is done via the THREADS and MAXTHREADS parameters, which we will discuss in this post using an example.

In earlier versions, scalability was enabled by fanning out the work to multiple replicats when the work could not be handled by a single replicat – but this required us to have multiple extracts, data pumps and replicat groups (and parameter files as well).

For example we had to create three separate replicat groups and use the RANGE parameter:

REP1.PRM
MAP sales.acct, TARGET sales.acct, FILTER (@RANGE (1, 3, ID));

REP2.PRM
MAP sales.acct, TARGET sales.acct, FILTER (@RANGE (2, 3, ID));

REP3.PRM
MAP sales.acct, TARGET sales.acct, FILTER (@RANGE (3, 3, ID))

Now in Goldengate 12c Coordinated replicat or delivery, there is a single replicat parameter file and additional replicat groups are created automatically and a single coordinator process or thread spawns additional threads and assigns individual workloads to each thread. Partitioning of workload is done via the THREADRANGE parameter used in the MAP statement.

For example now we require just one single replicat parameter file:

REP.PRM
MAP sales.acct, TARGET sales.acct, THREADRANGE(1-3, ID);

So if the target database is an Oracle version that does not support Integrated Replicat, or if it is a non-Oracle database, we can use the Coordinated Replicat feature to achieve more or less the same benefit as Integrated Replicat: higher throughput on the target database by processing the workload in parallel.

Let us now look at an example of using a Coordinated Replicat.

As in the case of the previous example using Integrated Replicat, the source database is an Oracle 12c Pluggable Database called SALES and we are replicating to another Oracle 12c Pluggable Database called SALES_DR.

We have created the table MYOBJECTS in both the source and target databases and have already enabled supplemental logging at the schema level.
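(For reference, schema-level supplemental logging can be enabled in GGSCI along these lines; a sketch only, assuming the gg_sales alias used earlier connects to the SALES PDB.)

DBLOGIN USERIDALIAS gg_sales
ADD SCHEMATRANDATA sh ALLCOLS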

SQL> create table myobjects as select * from all_objects where 1=2;

Table created.

SQL> alter table myobjects add constraint pk_myobjects primary key (object_id);

Table altered.

SQL> grant all on myobjects to C##GGADMIN;

Grant succeeded.

 

On the source we have created the Extract and Data Pump groups – we are using Integrated Extract in this case.


 
Register the integrated extract
 
GGSCI (orasql-001-dev.mydomain) 6> DBLOGIN USERIDALIAS gg_root

Successfully logged into database CDB$ROOT.

GGSCI (orasql-001-dev.mydomain) 7> REGISTER EXTRACT myext1 DATABASE  CONTAINER (sales)

Extract MYEXT1 successfully registered with database at SCN 3669081.

 
Add the Integrated Extract and Data Pump 
 
GGSCI (orasql-001-dev.mydomain) 8> ADD EXTRACT myext1 INTEGRATED TRANLOG, BEGIN NOW

EXTRACT added.

GGSCI (orasql-001-dev.mydomain) 9> ADD EXTTRAIL ./dirdat/lt EXTRACT myext1

EXTTRAIL added.

GGSCI (orasql-001-dev.mydomain) 10> ADD EXTRACT mydp1 EXTTRAILSOURCE ./dirdat/lt BEGIN NOW

EXTRACT added.

GGSCI (orasql-001-dev.mydomain) 11> ADD RMTTRAIL ./dirdat/rt EXTRACT mydp1

RMTTRAIL added.

 
Edit the Integrated Extract Parameter File
 
GGSCI (orasql-001-dev.mydomain) 11> edit params myext1

EXTRACT myext1

SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_root
LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT
EXTTRAIL ./dirdat/lt
SOURCECATALOG sales
TABLE sh.myobjects;

 
Edit the Data Pump Parameter File 
 
GGSCI (orasql-001-dev.mydomain) 12> edit params mydp1

EXTRACT mydp1
SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_owner
RMTHOST orasql-001-test, MGRPORT 7809
RMTTRAIL ./dirdat/rt
SOURCECATALOG sales
TABLE sh.myobjects;

 


On the target, add the Coordinated Replicat
 

GGSCI (orasql-001-test.mydomain) 1> DBLOGIN USERIDALIAS gg_sales
Successfully logged into database SALES_DR.

GGSCI (orasql-001-dev.mydomain) 4> add replicat rep1, coordinated, EXTTRAIL ./dirdat/rt, maxthreads 5
REPLICAT (Coordinated) added.

GGSCI (orasql-001-dev.mydomain) 1> view params rep1

REPLICAT rep1
SETENV (ORACLE_SID='condb2')
USERIDALIAS gg_sales
ASSUMETARGETDEFS
MAP sales.sh.myobjects, TARGET sales_dr.sh.myobjects,
THREADRANGE(1-5, OBJECT_ID);

GGSCI (kens-orasql-001-dev.corporateict.domain) 5> start replicat rep1

Sending START request to MANAGER ...
REPLICAT REP1 starting

GGSCI (kens-orasql-001-dev.corporateict.domain) 6> info replicat rep1

REPLICAT   REP1      Last Started 2014-01-23 10:38   Status RUNNING
COORDINATED          Coordinator                      MAXTHREADS 5
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Process ID           25811
Log Read Checkpoint  File ./dirdat/rt000000
                     First Record  RBA 0

GGSCI (kens-orasql-001-dev.corporateict.domain) 8>  info replicat rep1 detail

REPLICAT   REP1      Last Started 2014-01-23 10:38   Status RUNNING
COORDINATED          Coordinator                      MAXTHREADS 5
Checkpoint Lag       00:00:00 (updated 00:00:00 ago)
Process ID           25811
Log Read Checkpoint  File ./dirdat/rt000000
                     First Record  RBA 1642

 
We now populate the source table with some data and check if the extract has captured the change data
 

SQL> insert into myobjects
  2  select * from all_objects;

77694 rows created.

SQL> commit;

Commit complete.

GGSCI (kens-orasql-001-test.corporateict.domain) 1> stats extract myext1 latest

Sending STATS request to EXTRACT MYEXT1 ...

Start of Statistics at 2014-01-23 10:41:54.

Output to ./dirdat/lt:

Extracting from SALES.SH.MYOBJECTS to SALES_DR.SH.MYOBJECTS:

*** Latest statistics since 2014-01-23 10:41:33 ***
        Total inserts                                  77694.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               77694.00

End of Statistics.

 

We can now see that the Replicat has spawned 5 additional threads (because we had specified MAXTHREADS 5) and that the additional replicat groups have been created (REP1001 to REP1005).
 

GGSCI (orasql-001-dev.mydomain) 9> info replicat rep1 detail

REPLICAT   REP1      Last Started 2014-01-23 10:48   Status RUNNING
COORDINATED          Coordinator                      MAXTHREADS 5
Checkpoint Lag       00:03:11 (updated 00:00:00 ago)
Process ID           26831
Log Read Checkpoint  File ./dirdat/rt000000
                     2014-01-23 10:46:29.747584  RBA 28181513

Lowest Log BSN value: 

Active Threads:
ID  Group Name PID   Status   Lag at Chkpt  Time Since Chkpt
1   REP1001    26838 RUNNING  00:00:00      00:00:20
2   REP1002    26839 RUNNING  00:00:00      00:00:20
3   REP1003    26840 RUNNING  00:00:00      00:00:20
4   REP1004    26841 RUNNING  00:00:00      00:00:20
5   REP1005    26842 RUNNING  00:00:00      00:00:20

GGSCI (orasql-001-dev.mydomain) 2> info replicat rep1001

REPLICAT   REP1001   Last Started 2014-01-23 10:48   Status RUNNING
COORDINATED          Replicat Thread                  Thread 1
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Process ID           26838
Log Read Checkpoint  File ./dirdat/rt000000
                     2014-01-23 10:49:24.008242  RBA 56361384

 

About 77000 rows were inserted in the target table and we can see that the workload has been distributed by the replicat coordinator process among the 5 threads – so each thread has processed about 15000 rows.
 

GGSCI (orasql-001-dev.mydomain) 2> stats replicat rep1001

Sending STATS request to REPLICAT REP1001 ...

Start of Statistics at 2014-01-23 10:51:44.

Replicating from SALES.SH.MYOBJECTS to SALES_DR.SH.MYOBJECTS:

*** Total statistics since 2014-01-23 10:49:31 ***
        Total inserts                                  15748.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               15748.00

GGSCI (orasql-001-dev.mydomain) 4>

GGSCI (orasql-001-dev.mydomain) 3> stats replicat rep1005

Sending STATS request to REPLICAT REP1005 ...

Start of Statistics at 2014-01-23 10:52:09.

Replicating from SALES.SH.MYOBJECTS to SALES_DR.SH.MYOBJECTS:

*** Total statistics since 2014-01-23 10:49:31 ***
        Total inserts                                  15640.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               15640.00

Oracle Database 12c New Feature – Data Redaction


Data Redaction is one of the new Advanced Security features introduced in Oracle Database 12c.

It basically shields sensitive data from the application end users and this is done on the fly without any modification being done to the application.

This is different to Oracle Data Masking where data is transformed using masking formats and this updated masked data is stored in new data blocks.

We can create redaction policies which govern what condition needs to be satisfied before the data gets redacted, which columns in the table we are going to shield or apply redaction to, and how the data redaction is to be performed.

When the application issues a SQL statement, data is retrieved from the database and the redaction policy is then applied.

Let us look at a few examples of Data Redaction using the 12c Cloud Control and a 12c Container Database.

Note that we can do the same from the command line using the DBMS_REDACT.ADD_POLICY and DBMS_REDACT.ALTER_POLICY APIs.

In this example we have installed the plug-in for 12c Database management via Cloud Control and we can see the container database (CONDB2) and the pluggable database (SALES) as managed targets.

We can work with Data Redaction in Cloud Control and this is available from the Administration/Security menu

Let us now create a new Data Redaction policy.

Click on the Create button

In this example we will be creating a redaction policy called TEST_REDACTION and this will be applied to the EMP table in the SCOTT schema.

Click on the pencil icon which will launch the Policy Expression Builder.


The criterion for enforcing this policy is that the database user should be a non-DBA.
We can see that a redaction policy expression has been created:

SYS_CONTEXT('USERENV','ISDBA')='FALSE'

We can have a look at the DBMS_REDACT.ADD_POLICY command which has been issued in the background.
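Cloud Control builds this call for us; a hand-written equivalent might look roughly like the following sketch, assuming the SAL column is added with full redaction at policy creation time (in Cloud Control this is spread across the policy-creation and column steps shown below):

BEGIN
  DBMS_REDACT.ADD_POLICY(
    object_schema => 'SCOTT',
    object_name   => 'EMP',
    policy_name   => 'TEST_REDACTION',
    column_name   => 'SAL',
    function_type => DBMS_REDACT.FULL,
    expression    => 'SYS_CONTEXT(''USERENV'',''ISDBA'')=''FALSE''');
END;
/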


We will now specify what columns in the table we are going to redact and what kind of redaction policy we are going to apply.

We are going to hide the data contained in the SAL column of the table from any non-DBA database user account and we are not using a pre-defined template but will create our own Custom policy.

The redaction methods available are Full, Partial, Random and Regular expression.

In Full redaction, columns are redacted to a constant value depending on the data type of the redacted column – for example, 0 for a NUMBER column.

In Partial redaction, the user specifies which positions in the data are replaced by user-specified characters.

In Regular Expression redaction, a pattern match and replace is performed based on the supplied search and replacement parameters.
 

Redaction Method      Original Data       Redacted Data

Full                  100000              0
Partial               543-46-2457         xxx-xx-2457
Regular Expression    gavin@oracle.com    xxx@oracle.com
Random                123456              321654


 
In the first example we specify FULL.

Have a look at the DBMS_REDACT.ALTER_POLICY statement which has been issued.


We can see that the TEST_REDACTION policy has been created and there is now one redacted column in the EMP table.


Let us now edit the TEST_REDACTION policy and add another column to the redacted columns in the table.

Similarly if the user is non-DBA, we want to hide the data in the HIREDATE column and we are using the PARTIAL Redaction Function this time.

The redaction format we use – m01d01y2001 – transforms the data in the HIREDATE column so that a query against the HIREDATE column of the EMP table returns a value of '01-JAN-2001' instead of the actual HIREDATE value.
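A hand-written equivalent of this step might look like the following sketch (the action constant adds a new column to the existing policy, and the function_parameters value is the date format shown above):

BEGIN
  DBMS_REDACT.ALTER_POLICY(
    object_schema       => 'SCOTT',
    object_name         => 'EMP',
    policy_name         => 'TEST_REDACTION',
    action              => DBMS_REDACT.ADD_COLUMN,
    column_name         => 'HIREDATE',
    function_type       => DBMS_REDACT.PARTIAL,
    function_parameters => 'm01d01y2001');
END;
/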


We now see that there are two redacted columns in the EMP table.


Let us now test the redaction policy we have just created.

In the first instance we connect as SCOTT which does not have the DBA role granted to it and query the EMP table.

Notice the redaction policy in action and how the actual data is being shielded from the user.

 

[oracle@orasql-001-dev ~]$ sqlplus scott/tiger@localhost:1525/sales

SQL*Plus: Release 12.1.0.1.0 Production on Wed Jan 29 13:49:50 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Sat May 25 2013 04:26:41 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select ename,sal from emp;

ENAME             SAL
---------- ----------
SMITH               0
ALLEN               0
WARD                0
JONES               0
MARTIN              0
....
....

 

SQL> select ename,hiredate from scott.emp;

ENAME      HIREDATE
---------- ---------
SMITH      01-JAN-01
ALLEN      01-JAN-01
WARD       01-JAN-01
JONES      01-JAN-01
MARTIN     01-JAN-01
....
....

 

Let us connect as SYS and see the difference.

All the data is being returned because the user is a DBA account, unlike SCOTT.

SQL> conn / as sysdba
Connected.
SQL> alter session set container=sales;

Session altered.

SQL> select ename,sal from scott.emp;

ENAME             SAL
---------- ----------
SMITH             800
ALLEN            1600
WARD             1250
JONES            2975
MARTIN           1250
BLAKE            2850
.....
....

SQL> select ename,hiredate from scott.emp;

ENAME      HIREDATE
---------- ---------
SMITH      17-DEC-80
ALLEN      20-FEB-81
WARD       22-FEB-81
....
....

 

We have successfully shielded or masked data we consider to be sensitive from certain end users, without any modification to the application, and have done so with minimal effort using 12c Cloud Control!

Goldengate 12c – Configuring CDC between an Oracle 12c Multitenant Container Database and SQL Server 2012


This note describes the process of configuring an initial data load job as well as Change Data Capture from an Oracle 12c pluggable database source to a MS SQL Server 2012 target database.

Download the note here ….

GoldenGate real-time replication from Active Standby Database to SQL Server 2012 target


This note describes how to run an Initial Load along with Change Data Capture from a source Oracle 11g R2 Active Standby database (ALO Archived Log Only mode capture) to a MS SQL Server 2012 target database.

The table is a 6.3 million row table – AC_ACCOUNT in the IDIT_PRD schema.

Steps

Create the Initial Load Extract

GGSCI (db02) 2> add extract testini1 sourceistable
EXTRACT added.


extract testini1
setenv (ORACLE_SID="DEVSB2")
setenv (ORACLE_HOME="/opt/oracle/product/server/11.2.0.3")
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.corp, MGRPORT 7809,   tcpbufsize 10485760, tcpflushbytes 10485760
rmtfile ./dirdat/rr, maxfiles 999999, megabytes 200, append
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS ALTARCHIVELOGDEST  /u03/oracle/DEVSB2/arch/
TABLE IDIT_PRD.AC_ACCOUNT;

Notes on using ALO mode:

1) The connection is to the open Active Standby database and not the primary database

2) The TRANLOGOPTIONS ARCHIVEDLOGONLY parameter has to be used to indicate the extract needs to read the archive log files on the standby database host and not the online redo log files on the primary database host

3) If we are using the FRA as the location for the archived redo log files, then we need to ensure that the LOG_ARCHIVE_DEST_1 parameter on the standby database is set to a directory other than the FRA, because in ALO mode OGG cannot read archive log files from date-coded directories such as those the FRA creates (a new directory per day, based on the date). See the sketch after these notes.

Refer MOS note: ALO OGG Extract Unable to Find Archive Logs Under Date Coded sub Directories (Doc ID 1359776.1)

4) We have not specified the parameter COMPLETEARCHIVEDLOGONLY. This is the default in ALO mode. It forces Extract to wait for the archived log to be written to disk completely before starting to process redo data.

It is recommended NOT to use the NOCOMPLETEARCHIVEDLOGONLY parameter (which is the default for Classic Extract) when we are using ALO mode.
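Regarding note 3: on the standby this simply means pointing a local archive destination at a flat directory, for example (a sketch only; the path matches the ALTARCHIVELOGDEST used in the Extract below, and the destination number and any existing settings on the standby would need to be checked first):

SQL> alter system set log_archive_dest_1='LOCATION=/u03/oracle/DEVSB2/arch' scope=both;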

Create the Initial Load Replicat

GGSCI (DCV-RORSQL-N001) 125> add replicat testrep1 exttrail ./dirdat/rr
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 126> edit params testrep1
REPLICAT testrep1
TARGETDB sqlserver2012
SOURCEDEFS ./dirdef/source9.def
BATCHSQL
MAP IDIT_PRD.AC_ACCOUNT, TARGET IDIT_PRD.AC_ACCOUNT;


Create the CDC Extract

GGSCI (db02) 5> add extract cdcext tranlog begin now
EXTRACT added.


GGSCI (db02) 6> add rmttrail ./dirdat/rs extract cdcext
RMTTRAIL added.



GGSCI (db02) 2> edit params cdcext

Extract cdcext
setenv (ORACLE_SID="DEVSB2")
setenv (ORACLE_HOME="/opt/oracle/product/server/11.2.0.3")
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.corp, MGRPORT 7809
RMTTRAIL ./dirdat/rs
TRANLOGOPTIONS ARCHIVEDLOGONLY
TRANLOGOPTIONS ALTARCHIVELOGDEST  /u03/oracle/DEVSB2/arch/
TABLE IDIT_PRD.AC_ACCOUNT;

Note:
Since the CDC extract is reading archive log files from the Active Standby we have to specify the archive log sequence and position to start reading from

GGSCI (db02) 1> alter extract cdcext extseqno 105 extrba 188119040
EXTRACT altered.

Create the CDC Replicat on MS SQL Server 2012 target

GGSCI (DCV-RORSQL-N001) 131> add replicat repcdc exttrail ./dirdat/rs
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 133> edit params repcdc

REPLICAT repcdc
TARGETDB sqlserver2012
SOURCEDEFS ./dirdef/source9.def
MAP IDIT_PRD.AC_ACCOUNT, TARGET IDIT_PRD.AC_ACCOUNT;

Start the CDC Extract before the Initial Load Extract – Do not start the CDC Replicat!

GGSCI (db02) 2> start extract cdcext

Sending START request to MANAGER ...
EXTRACT CDCEXT starting


GGSCI (db02) 3> info extract cdcext

EXTRACT    CDCEXT    Last Started 2014-03-06 12:48   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:05:48 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     First Record
                     SCN 0.0 (0)

Start the Initial Load Extract

GGSCI (db02) 6> start extract testini1

Sending START request to MANAGER ...
EXTRACT TESTINI1 starting


GGSCI (db02) 7> info extract testini1

EXTRACT    TESTINI1  Initialized   2014-03-06 12:25   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE


GGSCI (db02) 12> !
info extract testini1

EXTRACT    TESTINI1  Last Started 2014-03-06 12:50   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table IDIT_PRD.AC_ACCOUNT
                     2014-03-06 12:50:29  Record 304670
Task                 SOURCEISTABLE


While the Initial Load Extract is running we perform a transaction on the Primary Oracle database

SQL> update idit_prd.ac_account
 2  set FREEZE_DATE='01-JAN-2020'
  3  where id=1;

1 row updated.

SQL> commit;

Commit complete.

SQL> alter system switch logfile;

System altered.

While the Initial Load Extract is running we start the Initial Load Replicat

GGSCI (DCV-RORSQL-N001) 135> start replicat testrep1

Sending START request to MANAGER ('MANAGER') ...
REPLICAT TESTREP1 starting


GGSCI (DCV-RORSQL-N001) 136> info replicat testrep1

REPLICAT   TESTREP1  Last Started 2014-03-06 12:53   Status RUNNING
Checkpoint Lag       00:02:39 (updated 00:00:00 ago)
Process ID           6764
Log Read Checkpoint  File ./dirdat/rr000000
                     2014-03-06 12:50:39.244754  RBA 21561942

We will start the CDC Replicat only after the Initial load has been completed on the target database

At this point in time the initial load replicat is still inserting rows on the target

GGSCI (DCV-RORSQL-N001) 137> stats replicat testrep1

Sending STATS request to REPLICAT TESTREP1 ...

Start of Statistics at 2014-03-06 12:53:54.

Replicating from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Total statistics since 2014-03-06 12:53:11 ***
        Total inserts                                 276172.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                              276172.00

Now the initial load extract has completed and we see that 6359427 rows have been extracted

GGSCI (db02) 17> info extract testini1

EXTRACT    TESTINI1  Last Started 2014-03-06 12:50   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table IDIT_PRD.AC_ACCOUNT
                     2014-03-06 12:53:27  Record 6359427
Task                 SOURCEISTABLE

The CDC extract is meanwhile running and we see that it has captured the UPDATE statement we executed

GGSCI (db02) 30> info extract cdcext

EXTRACT    CDCEXT    Last Started 2014-03-06 13:01   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:05 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2014-03-06 12:55:11  Seqno 107, RBA 3587072
                     SCN 2.2325484702 (10915419294)


GGSCI (db02) 31> stats extract cdcext

Sending STATS request to EXTRACT CDCEXT ...

Start of Statistics at 2014-03-06 13:04:54.

Output to ./dirdat/rs:

Extracting from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Total statistics since 2014-03-06 13:01:43 ***
        Total inserts                                      0.00
        Total updates                                  1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               1.00

We will start the CDC Replicat only after the initial load replicat has inserted all the rows into the MS SQL Server 2012 target

GGSCI (DCV-RORSQL-N001) 141> send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 690 seconds.


GGSCI (DCV-RORSQL-N001) 142> !
send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 715 seconds.


GGSCI (DCV-RORSQL-N001) 143> !
send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 759 seconds.

When we see the "At EOF, no more records to process" message it means the initial load is now complete

GGSCI (DCV-RORSQL-N001) 146> send replicat testrep1 getlag

Sending GETLAG request to REPLICAT TESTREP1 ...
Last record lag 1,072 seconds.
At EOF, no more records to process.



GGSCI (DCV-RORSQL-N001) 147> stats replicat testrep1 latest

Sending STATS request to REPLICAT TESTREP1 ...

Start of Statistics at 2014-03-06 13:10:26.

Replicating from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Latest statistics since 2014-03-06 12:53:11 ***
        Total inserts                                6359427.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                             6359427.00


We now can start the CDC Replicat on target

GGSCI (DCV-RORSQL-N001) 182> start replicat repcdc

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPCDC starting


GGSCI (DCV-RORSQL-N001) 183> info replicat repcdc

REPLICAT   REPCDC    Last Started 2014-03-06 13:49   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Process ID           6472
Log Read Checkpoint  File ./dirdat/rs000001
                     2014-03-06 13:49:28.714735  RBA 4111468

We can see that it has applied the one single UPDATE statement on the target SQL Server database

GGSCI (DCV-RORSQL-N001) 184> stats replicat repcdc

Sending STATS request to REPLICAT REPCDC ...

Start of Statistics at 2014-03-06 13:49:35.

Replicating from IDIT_PRD.AC_ACCOUNT to IDIT_PRD.AC_ACCOUNT:

*** Total statistics since 2014-03-06 13:49:21 ***
        Total inserts                                      0.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

Verify the UPDATE statement in the SQL Server database – note the value for the FREEZE_DATE column
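For example, a quick check on the SQL Server side (assuming the column names are the same on the target table):

SELECT id, freeze_date
FROM IDIT_PRD.AC_ACCOUNT
WHERE id = 1;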


GoldenGate Initial Load Methods Oracle source to SQL Server 2012 target


In this post we will look at three different methods of performing an initial data load from an Oracle 11g source database running on an HP-UX IA64 platform to a SQL Server 2012 target database hosted on Windows 2012 Datacenter.

The three methods we are using here are:

1) Oracle GoldenGate Direct Load over network without trail files
2) Oracle GoldenGate File to Replicat method
3) Oracle GoldenGate File with SQL Server BULK INSERT

These are some of the results obtained in our testing:

Initial load extract:

Between 2 and 3 million rows per minute

PRD.SH_BATCH_LOG table with 8001500 rows extracted in 4:30 minutes

PRD.AC_TRANSACTION_RACI table with 74104323 rows extracted in 21 minutes

Initial load replicat:

Between 1 and 1.5 million rows every 2 minutes

With a single replicat process, table PRD.SH_BATCH_LOG with 7895001 rows took 15 minutes.

With 3 parallel replicat processes, the same 7.8 million row table was loaded in under 5 minutes. Each replicat processed about 2.6 million rows.

3 parallel replicat processes pushed CPU utilization to around the 60-70% mark but not higher.

Using 5 parallel replicat processes we were able to load a 177 million row table in a little over 3 hours

The best performance obtained was using SQL Server BULK INSERT, where we were able to load 8 million rows in around 2 minutes.

 

1) Oracle GoldenGate Direct Load over network without trail files

Note – in the Direct Load method, no trail files are created – but this is not a very efficient method for a large table.

GGSCI (db02) 2> edit params defgen

DEFSFILE ./dirdat/source.def,
USERID GGATE_OWNER@REDEVDB2, PASSWORD ggate
TABLE PRD.T_PRODUCT_LINE;


oracle@db02:/u01/oracle/goldengate > ./defgen paramfile /u01/oracle/goldengate/dirprm/defgen.prm

Copy source.def to ./dirdef directory on Windows 2012 server

Create the Initial Load Extract

GGSCI (db02) 5> add extract extinit1 sourceistable
EXTRACT added.


extract extinit1
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.local, MGRPORT 7809
RMTTASK REPLICAT, GROUP rinit1
TABLE PRD.T_PRODUCT_LINE;

Next create the table in the SQL Server database

Create the initial load replicat on the target SQL Server GoldenGate environment

GGSCI (DCV-RORSQL-N001) 28> add replicat rinit1 specialrun
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 29> edit params rinit1

replicat rinit1
TARGETDB sqlserver2012
SOURCEDEFS ./dirdef/source.def
MAP PRD.T_PRODUCT_LINE, TARGET PRD.T_PRODUCT_LINE;

Start the initial load extract

GGSCI (db02) 11> start extract extinit1

Sending START request to MANAGER ...
EXTRACT EXTINIT1 starting


GGSCI (db02) 12> info extract extinit1

EXTRACT    EXTINIT1  Initialized   2014-02-25 10:26   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE


GGSCI (db02) 13> info extract extinit1

EXTRACT    EXTINIT1  Last Started 2014-02-25 10:59   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.T_PRODUCT_LINE
                     2014-02-25 10:59:03  Record 1
Task                 SOURCEISTABLE


GGSCI (db02) 14> info extract extinit1

EXTRACT    EXTINIT1  Last Started 2014-02-25 10:59   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.T_PRODUCT_LINE
                     2014-02-25 10:59:06  Record 801
Task                 SOURCEISTABLE

When the Extract shows the status STOPPED, we can check the target PRD.T_PRODUCT_LINE via the SQL Server 2012 Management Studio and we find that 801 rows have been inserted in the table.

Note – in this method we do not need to start the replicat process on the target.

 

2) Oracle GoldenGate File to Replicat method

In this method we will create 3 replicat processes which will be running in parallel processing the trail files which are generated by the extract process.

The table has 74 million rows.

Create initial load extract

GGSCI (db02) 15> add extract extinit2 sourceistable
EXTRACT added.

GGSCI (db02) 3> edit params extinit2

extract extinit2
userid ggate_owner password ggate
RMTHOST DCV-RORSQL-N001.local, MGRPORT 7809,  tcpbufsize 10485760, tcpflushbytes 10485760
rmtfile ./dirdat/te, maxfiles 999999, megabytes 400, purge
reportcount every 300 seconds, rate
TABLE PRD.AC_TRANSACTION_RACI;

As in the first method, we create the definitions file using DEFGEN and then copy the generated file to the dirdef directory on the target SQL Server GoldenGate software home.

oracle@db02:/u01/oracle/goldengate > ./defgen paramfile /u01/oracle/goldengate/dirprm/defgen.prm

Create three parallel replicat groups

GGSCI (DCV-RORSQL-N001) 39> add replicat repinit2  exttrail ./dirdat/te
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 40> add replicat repinit3  exttrail ./dirdat/te
REPLICAT added.


GGSCI (DCV-RORSQL-N001) 41> add replicat repinit4  exttrail ./dirdat/te
REPLICAT added.



GGSCI (DCV-RORSQL-N001) 42> edit params repinit2


GGSCI (DCV-RORSQL-N001) 43> edit params repinit3


GGSCI (DCV-RORSQL-N001) 44> edit params repinit4


GGSCI (DCV-RORSQL-N001) 45> view params repinit2
replicat repinit2
targetdb sqlserver2012
SOURCEDEFS ./dirdef/source.def
reportcount every 60 seconds, rate
overridedups
end runtime
MAP PRD.AC_TRANSACTION_RACI, TARGET PRD.AC_TRANSACTION_RACI , filter (
@RANGE (1,3));


GGSCI (DCV-RORSQL-N001) 46> view params repinit3
replicat repinit3
targetdb sqlserver2012
SOURCEDEFS ./dirdef/source.def
reportcount every 60 seconds, rate
overridedups
end runtime
MAP PRD.AC_TRANSACTION_RACI, TARGET PRD.AC_TRANSACTION_RACI , filter (
@RANGE (2,3));


GGSCI (DCV-RORSQL-N001) 47> view params repinit4
replicat repinit4
targetdb sqlserver2012
SOURCEDEFS ./dirdef/source.def
reportcount every 60 seconds, rate
overridedups
end runtime
MAP PRD.AC_TRANSACTION_RACI, TARGET PRD.AC_TRANSACTION_RACI , filter (
@RANGE (3,3));

Start the initial load extract


GGSCI (db02) 4> start extract extinit2

Sending START request to MANAGER ...
EXTRACT EXTINIT2 starting

GGSCI (db02) 5> info extract extinit2

EXTRACT    EXTINIT2  Initialized   2014-02-25 11:27   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Not Available
                     First Record         Record 0
Task                 SOURCEISTABLE


GGSCI (db02) 6> !
info extract extinit2

EXTRACT    EXTINIT2  Last Started 2014-02-25 11:56   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.AC_TRANSACTION_RACI
                     2014-02-25 11:56:51  Record 1
Task                 SOURCEISTABLE


GGSCI (db02) 9> !
info extract extinit2

EXTRACT    EXTINIT2  Last Started 2014-02-25 11:56   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.AC_TRANSACTION_RACI
                     2014-02-25 11:57:31  Record 2312001
Task                 SOURCEISTABLE

While the initial load extract is running, start the three parallel replicat processes

GGSCI (DCV-RORSQL-N001) 48> start replicat repinit2

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPINIT2 starting


GGSCI (DCV-RORSQL-N001) 49> start replicat repinit3

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPINIT3 starting


GGSCI (DCV-RORSQL-N001) 50> start replicat repinit4

Sending START request to MANAGER ('MANAGER') ...
REPLICAT REPINIT4 starting


GGSCI (DCV-RORSQL-N001) 51> info replicat repinit2

REPLICAT   REPINIT2  Last Started 2014-02-25 11:59   Status RUNNING
Checkpoint Lag       00:02:45 (updated 00:00:02 ago)
Process ID           5792
Log Read Checkpoint  File ./dirdat/te000000
                     2014-02-25 11:57:10.276246  RBA 4893789

While the 3 replicat processes are running, we can see that they are each processing almost the same number of rows and the initial load task has been distributed between the 3 parallel replicat processes

GGSCI (DCV-RORSQL-N001) 54> stats replicat repinit2 latest

Sending STATS request to REPLICAT REPINIT2 ...

Start of Statistics at 2014-02-25 12:02:46.

Replicating from PRD.AC_TRANSACTION_RACI to PRD.AC_TRANSACTION_RACI:

*** Latest statistics since 2014-02-25 11:59:45 ***
        Total inserts                                 100663.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                              100663.00

End of Statistics.


GGSCI (DCV-RORSQL-N001) 55> stats replicat repinit3 latest

Sending STATS request to REPLICAT REPINIT3 ...

Start of Statistics at 2014-02-25 12:02:56.

Replicating from PRD.AC_TRANSACTION_RACI to PRD.AC_TRANSACTION_RACI:

*** Latest statistics since 2014-02-25 11:59:45 ***
        Total inserts                                 100071.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                              100071.00

End of Statistics.


GGSCI (DCV-RORSQL-N001) 56> stats replicat repinit4 latest

Sending STATS request to REPLICAT REPINIT4 ...

Start of Statistics at 2014-02-25 12:03:02.

Replicating from PRD.AC_TRANSACTION_RACI to PRD.AC_TRANSACTION_RACI:

*** Latest statistics since 2014-02-25 11:59:47 ***
        Total inserts                                  98042.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               98042.00

End of Statistics.

We now see that the initial load extract has stopped and it has extracted 74 million rows

GGSCI (db02) 14> !
info extract extinit2

EXTRACT    EXTINIT2  Last Started 2014-02-25 11:56   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table PRD.AC_TRANSACTION_RACI
                     2014-02-25 12:17:53  Record 74104323
Task                 SOURCEISTABLE

 

3) Oracle GoldenGate File with SQL Server BULK INSERT

In this method we use the SQL Server 2012 BULK INSERT to process the text file which is generated by the GoldenGate extract process.

Create the initial load extract

Note the parameter used in the extract file – FORMATASCII, BCP

This parameter instructs Oracle GoldenGate to write the output to a text file which is compatible with the SQL Server BCP utility.

GGSCI (db02) 19> add extract extbcp sourceistable
EXTRACT added.

GGSCI (db02) 20> edit params extbcp

"/u01/oracle/goldengate/dirprm/extbcp.prm" 6 lines, 181 characters
extract extbcp
userid ggate_owner, password ggate
FORMATASCII, BCP
RMTHOST DCV-RORSQL-N001.local, MGRPORT 7809
rmtfile ./dirdat/myobjects.dat PURGE
TABLE GGATE_OWNER.MYOBJECTS;

Start the initial load extract

GGSCI (db02) 1> start extract extbcp

Sending START request to MANAGER ...
EXTRACT EXTBCP starting


GGSCI (db02) 2> info extract extbcp

EXTRACT    EXTBCP    Last Started 2014-02-25 12:37   Status RUNNING
Checkpoint Lag       Not Available
Log Read Checkpoint  Table GGATE_OWNER.MYOBJECTS
                     2014-02-25 12:37:05  Record 1
Task                 SOURCEISTABLE


GGSCI (db02) 3> !
info extract extbcp

EXTRACT    EXTBCP    Last Started 2014-02-25 12:37   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table GGATE_OWNER.MYOBJECTS
                     2014-02-25 12:37:06  Record 78165
Task                 SOURCEISTABLE

Create the initial load replicat

GGSCI (DCV-RORSQL-N001) 62> edit params repbcp


GGSCI (DCV-RORSQL-N001) 63> view params repbcp
targetdb sqlserver2012
GENLOADFILES  bcpfmt.tpl
SOURCEDEFS ./dirdef/source.def
extfile ./dirdat/myobjects.dat
assumetargetdefs
MAP GGATE_OWNER.MYOBJECTS, TARGET GGATE_OWNER.MYOBJECTS;

Start the replicat from the command line

D:\app\product\GoldenGate>replicat paramfile ./dirprm/repbcp.prm reportfile ./dirrpt/repbcp.rpt

***********************************************************************
               Oracle GoldenGate Delivery for SQL Server
Version 12.1.2.0.1 17597485 OGGCORE_12.1.2.0.T2_PLATFORMS_131206.0309
Windows x64 (optimized), Microsoft SQL Server on Dec  6 2013 12:44:54

Copyright (C) 1995, 2013, Oracle and/or its affiliates. All rights reserved.


                    Starting at 2014-02-25 12:41:14
***********************************************************************

Operating System Version:
Microsoft Windows , on x64
Version 6.2 (Build 9200: )

Process id: 1840

Description:

***********************************************************************
**            Running with the following parameters                  **
***********************************************************************

2014-02-25 12:41:14  INFO    OGG-03059  Operating system character set identified as windows-1252.

2014-02-25 12:41:14  INFO    OGG-02695  ANSI SQL parameter syntax is used for parameter parsing.

2014-02-25 12:41:15  INFO    OGG-01552  Connection String: provider=SQLNCLI11;initial catalog=PRD;data source=DCV-RORSQL-N001;persist security info=false;integrated security=sspi.

2014-02-25 12:41:15  INFO    OGG-03036  Database character set identified as windows-1252. Locale: en_US.

2014-02-25 12:41:15  INFO    OGG-03037  Session character set identified as windows-1252.

2014-02-25 12:41:15  INFO    OGG-03528  The source database character set, as determined from the table definition file, is UTF-8.
Using following columns in default map by name:
  object_id, object_name, object_type

File created for BCP initiation: MYOBJECTS.bat
File created for BCP format:     MYOBJECTS.fmt

Load files generated successfully.

In SQL Server 2012 Management Studio load the data into the SQL Server table via the BULK INSERT command.

  bulk insert [PRD].[GGATE_OWNER].[MYOBJECTS] from 'D:\app\product\GoldenGate\dirdat\myobjects.dat'
  with(
  DATAFILETYPE = 'char',
   FIELDTERMINATOR = '\t',
ROWTERMINATOR = '0x0a'
   );


(78165 row(s) affected)

Mask sensitive data using the 12c Cloud Control Data Masking Pack


In this example we will see how to mask sensitive data in a table using the Data Masking Pack which is included (as a separate licensed option) in Oracle 12c Cloud Control.

We create an Application Data Model first, where we define which columns are considered sensitive and are candidates for data masking, and then we create data masking policies or rules which instruct Oracle how to mask or scrub the data.

We can also use masking formats which are already supplied and ready to use out-of-the-box or we can create our own masking formats which can be then stored in a masking format library for future use.

Let us take the EMP table as an example.

We have cloned the table from the production database and in our test or development database we want to mask or hide any data which we consider to be confidential or sensitive from the development team or the user testing team for example.

Our data masking requirements are as follows:

1)      Shuffle data in the EMP table and group it on the JOB column. So when someone selects the record for a particular employee belonging to, say, the job category SALESMAN, the data is masked and a row belonging to some other random employee in the same job category SALESMAN is returned instead

2)      Hide the day and month the employee joined the company but retain the year value as the application requires the original year value and not some fictitious value

3)      The salary for the job category PRESIDENT should not be revealed

Note that data masking will physically replace the data, unlike the Data Redaction feature in the 12c database where the data displayed or returned by a query is changed on the fly.

So we create for this exercise a table called EMP_MASK which is a copy of the EMP table owned by SCOTT.

This is the data in the table before the data masking:

 

SQL> select * from emp_mask;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7369 SMITH      CLERK           7902 17-DEC-80        800                    20
      7499 ALLEN      SALESMAN        7698 20-FEB-81       1600        300         30
      7521 WARD       SALESMAN        7698 22-FEB-81       1250        500         30
      7566 JONES      MANAGER         7839 02-APR-81       2975                    20
      7654 MARTIN     SALESMAN        7698 28-SEP-81       1250       1400         30
      7698 BLAKE      MANAGER         7839 01-MAY-81       2850                    30
      7782 CLARK      MANAGER         7839 09-JUN-81       2450                    10
      7788 SCOTT      ANALYST         7566 19-APR-87       3000                    20
      7839 KING       PRESIDENT            17-NOV-81       5000                    10
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30
      7876 ADAMS      CLERK           7788 23-MAY-87       1100                    20
      7900 JAMES      CLERK           7698 03-DEC-81        950                    30
      7902 FORD       ANALYST         7566 03-DEC-81       3000                    20
      7934 MILLER     CLERK           7782 23-JAN-82       1300                    10

14 rows selected.

After the data masking job has been run, we can see that the table data has changed according to the data masking policies which we had defined.

 

SQL> select * from emp_mask;

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7844 WARD       SALESMAN        7698 02-AUG-81       1250        500         30
      7369 MILLER     CLERK           7782 29-MAY-82       1300                    10
      7934 JAMES      CLERK           7698 27-JAN-81        950                    30
      7788 FORD       ANALYST         7566 18-DEC-81       3000                    20
      7521 ALLEN      SALESMAN        7698 01-APR-81       1600        300         30
      7654 TURNER     SALESMAN        7698 25-NOV-81       1500          0         30
      7839 KING       PRESIDENT            10-MAY-81                               10
      7698 BLAKE      MANAGER         7839 02-AUG-81       2850                    30
      7499 MARTIN     SALESMAN        7698 29-MAY-81       1250       1400         30
      7902 SCOTT      ANALYST         7566 27-JAN-87       3000                    20
      7876 SMITH      CLERK           7902 01-AUG-80        800                    20
      7566 JONES      MANAGER         7839 29-MAY-81       2975                    20
      7782 CLARK      MANAGER         7839 27-JAN-81       2450                    10
      7900 ADAMS      CLERK           7788 02-AUG-87       1100                    20

14 rows selected.

The SAL column for KING who is the PRESIDENT has a null value.

The day and month for the HIREDATE column has been changed to a random value while retaining the year.

In the pre-masked table, EMPNO 7844 had these values:

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7844 TURNER     SALESMAN        7698 08-SEP-81       1500          0         30

In the post-masked table we see that the data for the row with 7844 EMPNO has been shuffled with the original row which had the EMPNO 7521 as both these rows belonged to the job category SALESMAN

 

     EMPNO ENAME      JOB              MGR HIREDATE         SAL       COMM     DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
      7844 WARD       SALESMAN        7698 02-AUG-81       1250        500         30

Note:

The following permissions are required for Data Masking (the database-side grants are sketched after the list).

  • EM_ALL_OPERATOR for Enterprise Manager Cloud Control users
  • SELECT_CATALOG_ROLE for database users
  • SELECT ANY DICTIONARY privilege for database users
  • EXECUTE privileges for the DBMS_CRYPTO package
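A sketch of the database-side grants (masking_user is a placeholder for the database account that runs the masking job; EM_ALL_OPERATOR is granted within Enterprise Manager itself, not via SQL):

GRANT SELECT_CATALOG_ROLE TO masking_user;
GRANT SELECT ANY DICTIONARY TO masking_user;
GRANT EXECUTE ON SYS.DBMS_CRYPTO TO masking_user;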

 

Let us take a look at the steps involved.

Download the note on 12c Cloud Control Data Masking …

GoldenGate change data capture and replication of BLOB and CLOB data


We will look at an example of GoldenGate replication of a table having a BLOB column and how an INSERT and UPDATE statement on the table with BLOB data is handled by GoldenGate.

We create an APEX 4.2 application to illustrate this example where we create a form and report based on the DOCUMENTS table and upload and download documents. We will observe how changes to the BLOB column data are replicated in real-time to the target database via GoldenGate change data capture.

Thanks to ACE Director Eddie Awad's article, which helped me understand how APEX handles file uploads and downloads.
Read the article by Eddie.

On the source database we create the DOCUMENTS table and a sequence and trigger to populate the primary key column ID.

CREATE TABLE documents
(
   ID              NUMBER PRIMARY KEY
  ,DOC_CONTENT    BLOB
  ,MIME_TYPE       VARCHAR2 (255)
  ,FILENAME        VARCHAR2 (255)
  ,LAST_UPDATED    DATE
  ,CHARACTER_SET   VARCHAR2 (128)
);

CREATE SEQUENCE documents_seq;

CREATE OR REPLACE TRIGGER documents_trg_bi
   BEFORE INSERT
   ON documents
   FOR EACH ROW
BEGIN
   :new.id := documents_seq.NEXTVAL;
END;
/

Create the Extract and Replicat processes

GGSCI (vindi-a) 3> add extract ext9 tranlog begin now
EXTRACT added.

GGSCI (vindi-a) 4> add rmttrail /u01/app/oracle/product/st_goldengate/dirdat/xx extract ext9
RMTTRAIL added.

GGSCI (vindi-a) 5> edit params ext9
extract ext9
CACHEMGR CACHESIZE 8G
userid gg_owner@testdb password gg_owner
DDL include ALL
ddloptions  addtrandata, report
rmthost poc-strelis-vindi, mgrport 7810
rmttrail  /u01/app/oracle/product/st_goldengate/dirdat/xx
dynamicresolution
SEQUENCE GG_OWNER.*;
TABLE GG_OWNER.DOCUMENTS;

GGSCI (vindi-a) 1> start extract ext9

Sending START request to MANAGER ...
EXTRACT EXT9 starting

GGSCI (vindi-a) 2> info extract ext9

EXTRACT    EXT9      Last Started 2014-05-22 08:43   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:04:54 ago)
Process ID           17984
Log Read Checkpoint  Oracle Redo Logs
                     2014-05-22 08:38:44  Seqno 122, RBA 9039376
                     SCN 0.0 (0)

On Target GoldenGate 

GGSCI (vindi-a) 3> add replicat rep9 exttrail /u01/app/oracle/product/st_goldengate/dirdat/xx
REPLICAT added.

GGSCI (vindi-a) 4> edit params rep9
replicat rep9
assumetargetdefs
ddlerror default ignore
userid gg_owner@strelis password gg_owner
MAP GG_OWNER.DOCUMENTS ,TARGET GG_OWNER.DOCUMENTS;

GGSCI (vindi-a) 5> start replicat rep9

Sending START request to MANAGER ...
REPLICAT REP9 starting

GGSCI (vindi-a) 6> info replicat rep9

REPLICAT   REP9      Last Started 2014-05-22 08:43   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Process ID           17919
Log Read Checkpoint  File /u01/app/oracle/product/st_goldengate/dirdat/xx000000
                     First Record  RBA 0

Since we have configured DDL replication, we see that the DOCUMENTS table has been created on the target database as well.

oracle@vind-a:/export/home/oracle $ sqlplus gg_owner/gg_owner@targetdb

SQL*Plus: Release 11.2.0.3.0 Production on Thu May 22 08:35:55 2014

Copyright (c) 1982, 2011, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> desc documents
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 ID                                        NOT NULL NUMBER
 DOC_CONTENT                                        BLOB
 MIME_TYPE                                          VARCHAR2(255)
 FILENAME                                           VARCHAR2(255)
 LAST_UPDATED                                       DATE
 CHARACTER_SET                                      VARCHAR2(128)

We now launch APEX to create our demo application.


In Application Builder click on Create

 

 
Select Database
 

 

 
Click Add Page
 

 
Click Next
 

 
Accept default value
 

 
Accept default values
 

 

 
Click Create Application
 

 
Click Page 1 link
 

 
From Regions menu click Create
 

 
Select Form and click Next
 

 
Select Form on a Table or View
 

 
Select DOCUMENTS table from LOV and click Next
 

 
Enter the Page name  and click Next
 

 
Select the primary key column of the DOCUMENTS table and click Next
 

 
The primary key of the table is populated via the sequence called by the trigger
 
Select Existing trigger and click Next
 

 
Select columns to display on the form and click Next
 

 
Change the label of the Create button to Upload and hide other buttons – click Next
 

 
Select the current page as the page to branch to and click Next
 

 
Click Create
 

 
Click Edit Page
 

 
Select the P1_DOC_CONTENT item and click Edit from the menu
 

 
In the Settings section of the item, add the DOCUMENTS table column names against the corresponding attributes as shown above
 
Click on Apply Changes
 

 

 
Click on Run
 

 
Enter workspace or application login credentials
 

 
Click the Browse button and select the file to upload
 

 
Click Upload
 

 
In GoldenGate we check the Extract and Replicat statistics and we can see the capture and apply of the change we just made
 

GGSCI (vind-a) 3> stats extract ext9 latest

Sending STATS request to EXTRACT EXT9 ...

Start of Statistics at 2014-05-22 09:01:44.

DDL replication statistics (for all trails):

*** Total statistics since extract started     ***
        Operations                                         0.00
        Mapped operations                                  0.00
        Unmapped operations                                0.00
        Other operations                                   0.00
        Excluded operations                                0.00

Output to /u01/app/oracle/product/st_goldengate/dirdat/xx:

Extracting from GG_OWNER.DOCUMENTS_SEQ to GG_OWNER.DOCUMENTS_SEQ:

*** Latest statistics since 2014-05-22 08:59:16 ***
        Total updates                                      1.00
        Total discards                                     0.00
        Total operations                                   1.00

Extracting from GG_OWNER.DOCUMENTS to GG_OWNER.DOCUMENTS:

*** Latest statistics since 2014-05-22 08:59:16 ***
        Total inserts                                      1.00
        Total updates                                      1.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

End of Statistics.

 
Connect to the target database and see if the record has been inserted.
 
Note that columns like MIME_TYPE, LAST_UPDATED and FILENAME are automatically populated.
 

SQL> col filename format a60
SQL> col mime_type format a30
SQL> set linesize 120
SQL> select id,filename,mime_type from documents;

        ID FILENAME                                                     MIME_TYPE
---------- ------------------------------------------------------------ ------------------------------
         1 Consultant Profile - Gavin Soorma.doc                        application/msword

Note the size of the document

SQL> select id,filename,dbms_lob.getlength(doc_content)  from documents;

        ID FILENAME                                                     DBMS_LOB.GETLENGTH(DOC_CONTENT)
---------- ------------------------------------------------------------ -------------------------------
         1 Consultant Profile - Gavin Soorma.doc                                                 532992

We will now add a new page to the application. Click Create Page

 

Select Form and click Next

 

 

Select Form on a Table with Report and click Next

 

Change the Region Title to Edit Documents and click Next

 

 

Select the table and click Next

 

Give a name for the tab of the new page we are creating and click Next

 

 

Select the columns to include in the report and click Next

 

 

Accept default and click Next

 

 

Change the Region Title and click Next

 

Select Primary key column and click Next

 

 

Select the columns to include in the form and click Next

 

 

Click Create

 

 

Click Run Page

 

 

Click the Edit icon

 

 

Click the Edit Page link at the bottom of the page

 

 

In the Settings section of the page, add the table column names as shown

Click Apply Changes and then Run

We will now download the document from the table, edit the document and upload it back into the database again

 

 

Click on the Download link and save the document

 

Open the document and we will make some changes

 

 

We will delete the “Technical Skills” table from the document, save it and then upload it back again

 

 

 

Click on Browse and upload the document which we just downloaded and edited

 

 Click Apply Changes

 

Oracle GoldenGate has applied this change, and we can see that the size of the document in the target database has reduced from 532992 bytes to 528896 bytes because we deleted some content from the document.

Connect to the target database and issue the query

Previous:

SQL> select id,filename,dbms_lob.getlength(doc_content)  from documents;

        ID FILENAME                                                     DBMS_LOB.GETLENGTH(DOC_CONTENT)
---------- ------------------------------------------------------------ -------------------------------
         1 Consultant Profile - Gavin Soorma.doc                                                 532992


Current:
SQL> select id,filename,dbms_lob.getlength(doc_content)  from documents;

        ID FILENAME                                                     DBMS_LOB.GETLENGTH(DOC_CONTENT)
---------- ------------------------------------------------------------ -------------------------------
         1 Consultant Profile - Gavin Soorma.doc                                                 528896

We can see that the Replicat process, which earlier applied the INSERT statement when the document was first uploaded to the database, has now applied some UPDATE statements as well

GGSCI (vind-a) 1> stats replicat rep9 latest

Sending STATS request to REPLICAT REP9 ...

Start of Statistics at 2014-05-22 10:08:47.

Replicating from GG_OWNER.DOCUMENTS to GG_OWNER.DOCUMENTS:

*** Latest statistics since 2014-05-22 08:59:20 ***
        Total inserts                                      1.00
        Total updates                                      3.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   4.00

GoldenGate and Virtual Memory – CACHEMGR CACHESIZE and CACHEDIRECTORY


After a recent Oracle GoldenGate installation at a client site running on Solaris 11, we observed memory-related errors in the GoldenGate error log like the ones mentioned below, and the extract processes were abending on startup.

ERROR OGG-01841 CACHESIZE TOO SMALL:
ERROR OGG-01843 default maximum buffer size (2097152) > absolute maximum buffer size (0)

The Oracle database alert log was also reporting quite a few "ORA-04030: out of process memory" errors.

The Solaris server had 64 GB of RAM, but it seemed that GoldenGate was requesting 128 GB when the extract processes were started.

Let us see why, and also take a look at how GoldenGate manages memory.

The Oracle redo log files contain both committed and uncommitted changes, but GoldenGate only replicates committed transactions. So it needs a cache where it can store the operations of each transaction until it receives a commit or rollback for that transaction. This is particularly significant for large as well as long-running transactions.

This cache is a virtual memory pool, or global cache, shared by all the extract and replicat processes. Sub-pools are allocated for each Extract log reader thread or Replicat trail reader thread, as well as dedicated sub-pools for holding large data like BLOBs.

The documentation states: "While the actual amount of physical memory that is used by any Oracle GoldenGate process is controlled by the operating system, the cache manager keeps an Oracle GoldenGate process working within the soft limit of its global cache size, only allocating virtual memory on demand."

The CACHEMGR parameter controls the amount of virtual memory and temporary disk space that is available for caching uncommitted transaction data.

The CACHEMGR CACHESIZE parameter controls the virtual memory allocation, and from GoldenGate version 11.2 onwards the default CACHESIZE on a 64-bit system is 64 GB.

While the CACHESIZE parameter controls the virtual memory, if that limit is exceeded GoldenGate will temporarily swap data to disk, which by default is allocated in the dirtmp sub-directory of the Oracle GoldenGate installation directory.

The dirtmp location will contain the .cm files. The cache manager assumes that all of the free space on the file system is available and will use it to create the .cm files until it becomes full. To regulate this we can use the CACHEMGR CACHEDIRECTORY parameter to assign both a directory location where these .cm files will be created and a size limit.

So the usage of these parameters is:

CACHEMGR CACHESIZE {size}
CACHEMGR CACHEDIRECTORY {path} {size}
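
For example, a parameter file sketch with an 8 GB virtual memory soft limit and a hypothetical dedicated file system for the .cm files capped at 100 GB (the path and sizes here are illustrative only):

CACHEMGR CACHESIZE 8G
CACHEMGR CACHEDIRECTORY /u02/gg_cache 100GB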

As mentioned earlier, CACHESIZE defaults to 64 GB on 64-bit systems, and we saw 128 GB being used because the documentation states:

"The CACHESIZE value will always be a power of two, rounded down from the value of PROCESS VM AVAIL FROM OS."

So in our case we had set the extract and replicat processes to be started automatically by the Manager on restart. These processes start simultaneously, so when the first extract process started it momentarily grabbed 128 GB of memory and there was no memory left for the other extract processes to start.

So we used the CACHESIZE parameter to set an upper limit on how much of the machine's virtual memory GoldenGate can use, by adding this parameter to each of the extract parameter files:

CACHEMGR CACHESIZE 8G
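
Once the limit is in place, the cache manager's actual allocations for a given process can be checked from GGSCI with the cache statistics, for example (the extract name is illustrative):

GGSCI> send extract ext1, cachemgr cachestats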

Adding new tables to a GoldenGate Extract and using the TABLEEXCLUDE parameter


At one of my recent client sites, there was a requirement to enable GoldenGate Change Data Capture for a schema with over 800 tables. Application tables were also frequently added, and these new tables needed CDC enabled without having to stop and restart the extract every time the extract parameter file was changed to add a new table name.

Using information gathered from AWR reports and DBA_TAB_MODIFICATIONS, we were able to identify the 20 tables with the highest level of DML activity, and these tables (as well as their associated child tables) were placed in 5 different extract groups.

The sixth extract would act as a ‘catch-all’ extract for all other tables as well as any new tables which were subsequently added. Rather than having to list 780 tables in the extract parameter file, we instead used the TABLEEXCLUDE clause to list the tables contained in the 5 other extract groups and then a final TABLE [SCHEMA NAME].* entry to account for all other tables which had not been explicitly listed in the extract parameter files.

Since we had enabled DDL replication, when a new table was created on the source, it was automatically created at the target as well.

Here is the ‘catch-all’ extract parameter file we used, and I will briefly explain some of the parameters it contains.

extract ext6
CACHEMGR CACHESIZE 8G
userid gg_owner@RELIS password xxxxxxxxx
exttrail ./dirdat/cc
dynamicresolution
DDL include ALL
ddloptions addtrandata, report
TABLEEXCLUDE RELIS.AUD_TRANSACTION_LOG
TABLEEXCLUDE RELIS.NEVDIS_ERROR_LOG
TABLEEXCLUDE RELIS.NEVDIS_MSG_LOG
TABLEEXCLUDE RELIS.CLI_MERGES
TABLEEXCLUDE RELIS.CLI_MERGE_CLIENTS
TABLEEXCLUDE RELIS.CLI_MERGE_OBJECTS
TABLEEXCLUDE RELIS.CLI_MERGE_CONTACTS
TABLEEXCLUDE RELIS.CLI_MERGE_ADDRESSES
TABLEEXCLUDE RELIS.ACC_ACCOUNTS
TABLEEXCLUDE RELIS.ACC_ACCOUNT_ITEMS
TABLEEXCLUDE RELIS.ACC_COLLECTION_TYPE_CODES
TABLEEXCLUDE RELIS.ACC_COLLECTIONS
TABLEEXCLUDE RELIS.ACC_PENDING_ACCOUNT_ITEMS
TABLEEXCLUDE RELIS.ACC_ADJUSTMENTS
TABLEEXCLUDE RELIS.ACC_ALLOCATIONS
TABLEEXCLUDE RELIS.ACC_ADJUSTED_ALLOCATIONS
TABLEEXCLUDE RELIS.ACC_CLEARING_DETAILS
TABLEEXCLUDE RELIS.ACC_DEBIT_NOTE_ITEMS
TABLEEXCLUDE RELIS.ACC_DEFERRED_ADJUSTMENTS
TABLEEXCLUDE RELIS.ACC_DISHONOURED_ITEMS
TABLEEXCLUDE RELIS.ACC_FER_HOLDING_DETAILS
TABLEEXCLUDE RELIS.ACC_FOLLOWUP_LETTERS
TABLEEXCLUDE RELIS.ACC_PAYMENT_OPTION_ITEMS
TABLEEXCLUDE RELIS.ACC_PENDING_ASSOCIATIONS
TABLEEXCLUDE RELIS.ACC_RECEIPTS
TABLEEXCLUDE RELIS.ACC_RECEIPT_ITEMS
TABLEEXCLUDE RELIS.ACC_ASSOCIATIONS
TABLEEXCLUDE RELIS.VEH_INSPECTION_PAYMENT
TABLEEXCLUDE RELIS.ACC_SUNDRY_CREDITOR_DETAILS
TABLEEXCLUDE RELIS.ACC_UNDER_OVER_BANKS
TABLEEXCLUDE RELIS.ACC_PAYMENT_OPTIONS
TABLEEXCLUDE RELIS.ACC_PAYMENT_ITEMS
TABLEEXCLUDE RELIS.ACC_CARDGATE
TABLEEXCLUDE RELIS.ACC_PAYMENT_DUE
TABLEEXCLUDE RELIS.VEH_REGISTERED_OWNERS
TABLEEXCLUDE RELIS.EVT_REGISTRATIONS
TABLEEXCLUDE RELIS.EVT_VEH_SANCTIONS
TABLEEXCLUDE RELIS.VEH_FLEET_VEHICLES
TABLEEXCLUDE RELIS.VEH_GARAGE_ADDRESSES
TABLEEXCLUDE RELIS.VEH_IMMOBILISERS_INSTALLED
TABLEEXCLUDE RELIS.VEH_OWNER_CONCESSIONS
TABLEEXCLUDE RELIS.VEH_OWNER_CONDITIONS
TABLEEXCLUDE RELIS.VEH_PLATE_RETURNS
TABLEEXCLUDE RELIS.VEH_PLATE_TITLES
TABLEEXCLUDE RELIS.VEH_REMINDER_SUB
TABLEEXCLUDE RELIS.VEH_SANCTION_VEHICLES
TABLE RELIS.*;

DYNAMICRESOLUTION

When the extract process starts, if there are many tables listed in the parameter file, GoldenGate has to query the database and build a metadata record for each table listed via the TABLE clause. If there are many tables involved, this can affect the startup time of the extract. DYNAMICRESOLUTION causes the record to be built one table at a time, instead of all at once. The metadata of any given table is added when Extract first encounters the object ID in the transaction log, while record-building for other tables is deferred until their object IDs are encountered.

DDL INCLUDE ALL

Enables DDL support not only for objects referenced in TABLE or MAP clauses but also for DDL operations that pertain to tables which are not mapped with a TABLE or MAP statement.

DDLOPTIONS ADDTRANDATA, REPORT

Enables Oracle table-level supplemental logging automatically for new tables created with a CREATE TABLE statement. It produces the same result as executing the ADD TRANDATA command in GGSCI.

Also controls whether or not expanded DDL processing information is written to the report file. The default of NOREPORT reports basic DDL statistics. REPORT adds the parameters being used along with a step-by-step history of the operations that were processed as part of the DDL capture.
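
The expanded DDL history produced by the REPORT option can then be examined in the process report file, for example:

GGSCI> view report ext6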

CACHEMGR CACHESIZE 8G

See the earlier post ‘GoldenGate and Virtual Memory – CACHEMGR CACHESIZE and CACHEDIRECTORY’ for more information on the CACHEMGR CACHESIZE parameter.
