Channel: Oracle DBA – Tips and Techniques

12c Multitenancy Backup and Recovery


Here are a few examples of backup and recovery in an Oracle 12c multitenant environment with Container and Pluggable databases involved.

The first thing to keep in mind is the structure of a 12c Container and Pluggable database. There is only one set of control files and redo log files and that is at the container level. So the same principle applies to the archived redo log files as well.

Individual pluggable databases do not have redo log files or control files – but they can have individual temporary tablespace tempfiles.
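
To see this layout for yourself, the container-wide V$ views can be queried from the root – a minimal sketch (no output shown; file names will differ in your environment):

SQL> conn / as sysdba
Connected.

SQL> -- control files and online redo logs exist only at the CDB level
SQL> select name from v$controlfile;
SQL> select member from v$logfile;

SQL> -- datafiles and tempfiles are tagged with the container they belong to
SQL> select con_id, name from v$datafile order by con_id;
SQL> select con_id, name from v$tempfile order by con_id;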

Backups can be performed at the container level.

A single RMAN BACKUP DATABASE command will back up the root container database as well as all the pluggable databases.

RMAN> backup database;

Starting backup at 02-APR-14
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=31 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00003 name=/u01/app/oracle/oradata/condb1/sysaux01.dbf
input datafile file number=00001 name=/u01/app/oracle/oradata/condb1/system01.dbf
input datafile file number=00004 name=/u01/app/oracle/oradata/condb1/undotbs01.dbf
input datafile file number=00016 name=/u01/app/oracle/oradata/condb1/ggs_data01.dbf
input datafile file number=00006 name=/u01/app/oracle/oradata/condb1/users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:45
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00009 name=/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
input datafile file number=00011 name=/u01/app/oracle/oradata/condb1/pdb1/example01.dbf
input datafile file number=00008 name=/u01/app/oracle/oradata/condb1/pdb1/system01.dbf
input datafile file number=00010 name=/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00013 name=/u01/app/oracle/oradata/condb1/pdb2/sysaux01.dbf
input datafile file number=00015 name=/u01/app/oracle/oradata/condb1/pdb2/example01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/condb1/pdb2/system01.dbf
input datafile file number=00014 name=/u01/app/oracle/oradata/condb1/pdb2/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EAA0B10062AA3A41E0438D15060AC71B/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpor93z_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=/u01/app/oracle/oradata/condb1/pdbseed/sysaux01.dbf
input datafile file number=00005 name=/u01/app/oracle/oradata/condb1/pdbseed/system01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA792426F4762CDBE0438D15060A3359/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpos2bl_.bkp tag=TAG20140402T081602 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 02-APR-14

Starting Control File and SPFILE Autobackup at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843812283_9mposw73_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 02-APR-14

Backups can also be performed at the pluggable database level. Note that the control file backed up in this case is still the one at the container database level.

Here we have connected via RMAN to the container database and are backing up one of the pluggable databases.

RMAN> backup pluggable  database pdb2;

Starting backup at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00013 name=/u01/app/oracle/oradata/condb1/pdb2/sysaux01.dbf
input datafile file number=00015 name=/u01/app/oracle/oradata/condb1/pdb2/example01.dbf
input datafile file number=00012 name=/u01/app/oracle/oradata/condb1/pdb2/system01.dbf
input datafile file number=00014 name=/u01/app/oracle/oradata/condb1/pdb2/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: starting piece 1 at 02-APR-14
channel ORA_DISK_1: finished piece 1 at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EAA0B10062AA3A41E0438D15060AC71B/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T120428_9mq32f70_.bkp tag=TAG20140402T120428 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:25
Finished backup at 02-APR-14

Starting Control File and SPFILE Autobackup at 02-APR-14
piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843825894_9mq336jk_.bkp comment=NONE
Finished Control File and SPFILE Autobackup at 02-APR-14

We can also use RMAN to connect to an individual pluggable database instead of the container database.

$ rman target sys/syspasswd@pdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 08:48:30 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)

RMAN> list backup of database;

using target database control file instead of recovery catalog

List of Backup Sets
===================


BS Key  Type LV Size       Device Type Elapsed Time Completion Time
------- ---- -- ---------- ----------- ------------ ---------------
5       Full    905.24M    DISK        00:00:22     02-APR-14
        BP Key: 5   Status: AVAILABLE  Compressed: NO  Tag: TAG20140402T081602
        Piece Name: /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
  List of Datafiles in backup set 5
  File LV Type Ckp SCN    Ckp Time  Name
  ---- -- ---- ---------- --------- ----
  8       Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/system01.dbf
  9       Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
  10      Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
  11      Full 8195981    02-APR-14 /u01/app/oracle/oradata/condb1/pdb1/example01.dbf

Loss of Tempfile at pluggable database level

The temp file is automatically re-created when the pluggable database is closed and reopened.

SQL> select name from v$tempfile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf

SQL> !rm /u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf


SQL> conn sys as sysdba
Enter password:
Connected.
SQL> alter pluggable database pdb1 close immediate;

Pluggable database altered.

SQL> alter pluggable database pdb1 open read write;

Pluggable database altered.

SQL> !ls /u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf
/u01/app/oracle/oradata/condb1/pdb1/pdb1_temp01.dbf

Loss of Non-System data file at pluggable database level

Online recovery of SYSAUX tablespace

SQL> conn sys/syspasswd@pdb1 as sysdba
Connected.

SQL>  select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/u01/app/oracle/oradata/condb1/undotbs01.dbf
/u01/app/oracle/oradata/condb1/pdb1/system01.dbf
/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
/u01/app/oracle/oradata/condb1/pdb1/example01.dbf

SQL> !rm /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf

SQL> alter tablespace sysaux offline;

Tablespace altered.

$ rman target sys/syspasswd@pdb1

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 10:31:30 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)

RMAN> restore tablespace sysaux;

Starting restore at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=274 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 02-APR-14

RMAN> recover tablespace sysaux;

Starting recover at 02-APR-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:00

Finished recover at 02-APR-14

RMAN> alter tablespace sysaux online;

Statement processed

Loss of SYSTEM tablespace datafile at pluggable database level

Note – online recovery of the pluggable database cannot be performed in this case.

The entire container database has to be shut down and mounted, and the pluggable database then recovered.


SQL> !rm /u01/app/oracle/oradata/condb1/pdb1/system01.dbf


SQL> alter session set container=pdb1;

Session altered.


SQL> shutdown abort
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/app/oracle/oradata/condb1/pdb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> alter pluggable database pdb1 close;
alter pluggable database pdb1 close
*
ERROR at line 1:
ORA-01116: error in opening database file 8
ORA-01110: data file 8: '/u01/app/oracle/oradata/condb1/pdb1/system01.dbf'
ORA-27041: unable to open file

To recover the pluggable database we need to connect to the container database, shut it down (this will shut down all the other pluggable databases as well), mount it and then recover the pluggable database.


SQL> shutdown abort
ORACLE instance shut down.
SQL> startup mount;
ORACLE instance started.

Total System Global Area 2137886720 bytes
Fixed Size                  2290416 bytes
Variable Size            1207962896 bytes
Database Buffers          922746880 bytes
Redo Buffers                4886528 bytes
Database mounted.

RMAN> restore tablespace pdb1:system;

Starting restore at 02-APR-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/condb1/pdb1/system01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:07
Finished restore at 02-APR-14

RMAN> recover tablespace pdb1:system;

Starting recover at 02-APR-14
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 02-APR-14

RMAN> alter database open;

Statement processed


RMAN> alter pluggable database all open read write;

Statement processed

Loss of SYSTEM data file at the Container database level

Note – if we lose the container database SYSTEM datafile we cannot connect to any of the pluggable databases either.

We have to shutdown abort the container database, then mount it and perform offline recovery of the SYSTEM tablespace.


SQL> !rm /u01/app/oracle/oradata/condb1/system01.dbf

SQL> alter system flush buffer_cache;

System altered.

SQL> select count(*) from dba_objects;
select count(*) from dba_objects
                     *
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '/u01/app/oracle/oradata/condb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3


SQL> alter session set container=pdb1;
ERROR:
ORA-00604: error occurred at recursive SQL level 1
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '/u01/app/oracle/oradata/condb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00604: error occurred at recursive SQL level 2
ORA-01116: error in opening database file 1
ORA-01110: data file 1: '/u01/app/oracle/oradata/condb1/system01.dbf'
ORA-27041: unable to open file
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3

SQL> shutdown abort
ORACLE instance shut down.

$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 10:43:29 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database (not started)

RMAN> startup mount;

Oracle instance started
database mounted

Total System Global Area    2137886720 bytes

Fixed Size                     2290416 bytes
Variable Size               1207962896 bytes
Database Buffers             922746880 bytes
Redo Buffers                   4886528 bytes

RMAN> restore tablespace system;

Starting restore at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=11 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/condb1/system01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:15
Finished restore at 02-APR-14

RMAN> recover tablespace system;

Starting recover at 02-APR-14
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 175 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc
archived log for thread 1 with sequence 176 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_176_9mpxgpnl_.arc
archived log for thread 1 with sequence 177 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_177_9mpy4lj5_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc thread=1 sequence=175
media recovery complete, elapsed time: 00:00:01
Finished recover at 02-APR-14


RMAN> alter database open;

Statement processed

RMAN>


RMAN> alter pluggable database all open read write;

Statement processed

Point-in-time Recovery of Pluggable Database

Note – an auxiliary instance is created to perform the point in time recovery of the pluggable database.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
    8388302


SQL> conn sh/sh@pdb1
Connected.

SQL> truncate table sales;

Table truncated.


$ rman target /

Recovery Manager: Release 12.1.0.1.0 - Production on Wed Apr 2 11:31:37 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)

RMAN> run {
2> set until scn  8388302;
3> restore pluggable database pdb1;
4> recover pluggable database pdb1;
5> alter pluggable database pdb1 open resetlogs;
6> }
executing command: SET until clause

Starting restore at 02-APR-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=29 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00008 to /u01/app/oracle/oradata/condb1/pdb1/system01.dbf
channel ORA_DISK_1: restoring datafile 00009 to /u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00010 to /u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf
channel ORA_DISK_1: restoring datafile 00011 to /u01/app/oracle/oradata/condb1/pdb1/example01.dbf
channel ORA_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp
channel ORA_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/EA795F28CCF12888E0438D15060AAF42/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpoqhvh_.bkp tag=TAG20140402T081602
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:25
Finished restore at 02-APR-14

Starting recover at 02-APR-14
current log archived
using channel ORA_DISK_1
RMAN-05026: WARNING: presuming following set of tablespaces applies to specified Point-in-Time

List of tablespaces expected to have UNDO segments
Tablespace SYSTEM
Tablespace UNDOTBS1

Creating automatic instance, with SID='zpDF'

initialization parameters used for automatic instance:
db_name=CONDB1
db_unique_name=zpDF_pitr_pdb1_CONDB1
compatible=12.1.0.0.0
db_block_size=8192
db_files=200
sga_target=1G
processes=80
diagnostic_dest=/u01/app/oracle
#No auxiliary destination in use
enable_pluggable_database=true
_clone_one_pdb_recovery=true
control_files=/u01/app/oracle/fast_recovery_area/CONDB1/controlfile/o1_mf_9mq17sor_.ctl
#No auxiliary parameter file used


starting up automatic instance CONDB1

Oracle instance started

Total System Global Area    1068937216 bytes

Fixed Size                     2296576 bytes
Variable Size                281019648 bytes
Database Buffers             780140544 bytes
Redo Buffers                   5480448 bytes
Automatic instance created

contents of Memory Script:
{
# set requested point in time
set until  scn 8388302;
# restore the controlfile
restore clone controlfile;
# mount the controlfile
sql clone 'alter database mount clone database';
}
executing Memory Script

executing command: SET until clause

Starting restore at 02-APR-14
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=75 device type=DISK

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: restoring control file
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843822011_9mpz9w6k_.bkp
channel ORA_AUX_DISK_1: piece handle=/u01/app/oracle/fast_recovery_area/CONDB1/autobackup/2014_04_02/o1_mf_s_843822011_9mpz9w6k_.bkp tag=TAG20140402T110011
channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/fast_recovery_area/CONDB1/controlfile/o1_mf_9mq17sor_.ctl
Finished restore at 02-APR-14

sql statement: alter database mount clone database

contents of Memory Script:
{
# set requested point in time
set until  scn 8388302;
# switch to valid datafilecopies
switch clone datafile  8 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/system01.dbf";
switch clone datafile  9 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf";
switch clone datafile  10 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf";
switch clone datafile  11 to datafilecopy
 "/u01/app/oracle/oradata/condb1/pdb1/example01.dbf";
# set destinations for recovery set and auxiliary set datafiles
set newname for datafile  1 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_system_9mq181xx_.dbf";
set newname for datafile  4 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_undotbs1_9mq181yx_.dbf";
set newname for datafile  3 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf";
set newname for datafile  6 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_users_9mq188rk_.dbf";
set newname for datafile  16 to
 "/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_ggs_data_9mq188rq_.dbf";
# restore the tablespaces in the recovery set and the auxiliary set
restore clone datafile  1, 4, 3, 6, 16;
switch clone datafile all;
}
executing Memory Script

executing command: SET until clause

datafile 8 switched to datafile copy
input datafile copy RECID=7 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/system01.dbf

datafile 9 switched to datafile copy
input datafile copy RECID=8 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/sysaux01.dbf

datafile 10 switched to datafile copy
input datafile copy RECID=9 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/SAMPLE_SCHEMA_users01.dbf

datafile 11 switched to datafile copy
input datafile copy RECID=10 STAMP=843824010 file name=/u01/app/oracle/oradata/condb1/pdb1/example01.dbf

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 02-APR-14
using channel ORA_AUX_DISK_1

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_system_9mq181xx_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00004 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_undotbs1_9mq181yx_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00003 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00006 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_users_9mq188rk_.dbf
channel ORA_AUX_DISK_1: restoring datafile 00016 to /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_ggs_data_9mq188rq_.dbf
channel ORA_AUX_DISK_1: reading from backup piece /u01/app/oracle/fast_recovery_area/CONDB1/backupset/2014_04_02/o1_mf_nnndf_TAG20140402T081602_9mpop2my_.bkp

channel ORA_AUX_DISK_1: restored backup piece 1
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:55
Finished restore at 02-APR-14

datafile 1 switched to datafile copy
input datafile copy RECID=16 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_system_9mq181xx_.dbf
datafile 4 switched to datafile copy
input datafile copy RECID=17 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_undotbs1_9mq181yx_.dbf
datafile 3 switched to datafile copy
input datafile copy RECID=18 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf
datafile 6 switched to datafile copy
input datafile copy RECID=19 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_users_9mq188rk_.dbf
datafile 16 switched to datafile copy
input datafile copy RECID=20 STAMP=843824065 file name=/u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_ggs_data_9mq188rq_.dbf

contents of Memory Script:
{
# set requested point in time
set until  scn 8388302;
# online the datafiles restored or switched
sql clone "alter database datafile  1 online";
sql clone "alter database datafile  4 online";
sql clone "alter database datafile  3 online";
sql clone 'PDB1' "alter database datafile
 8 online";
sql clone 'PDB1' "alter database datafile
 9 online";
sql clone 'PDB1' "alter database datafile
 10 online";
sql clone 'PDB1' "alter database datafile
 11 online";
sql clone "alter database datafile  6 online";
sql clone "alter database datafile  16 online";
# recover pdb
recover clone database tablespace  "SYSTEM", "UNDOTBS1", "SYSAUX", "USERS", "GGS_DATA" pluggable database
 'PDB1'   delete archivelog;
sql clone 'alter database open read only';
plsql <<>>;
plsql <<>>;
# shutdown clone before import
shutdown clone abort
plsql <<  'PDB1');
end; >>>;
}
executing Memory Script

executing command: SET until clause

sql statement: alter database datafile  1 online

sql statement: alter database datafile  4 online

sql statement: alter database datafile  3 online

sql statement: alter database datafile  8 online

sql statement: alter database datafile  9 online

sql statement: alter database datafile  10 online

sql statement: alter database datafile  11 online

sql statement: alter database datafile  6 online

sql statement: alter database datafile  16 online

Starting recover at 02-APR-14
using channel ORA_AUX_DISK_1

starting media recovery

archived log for thread 1 with sequence 175 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc
archived log for thread 1 with sequence 176 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_176_9mpxgpnl_.arc
archived log for thread 1 with sequence 177 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_177_9mpy4lj5_.arc
archived log for thread 1 with sequence 178 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_178_9mpyf1gy_.arc
archived log for thread 1 with sequence 179 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_179_9mpys08g_.arc
archived log for thread 1 with sequence 180 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_180_9mpzn0jl_.arc
archived log for thread 1 with sequence 181 is already on disk as file /u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_181_9mq17sc3_.arc
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_175_9mpqqpvv_.arc thread=1 sequence=175
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_176_9mpxgpnl_.arc thread=1 sequence=176
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_177_9mpy4lj5_.arc thread=1 sequence=177
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_178_9mpyf1gy_.arc thread=1 sequence=178
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_179_9mpys08g_.arc thread=1 sequence=179
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_180_9mpzn0jl_.arc thread=1 sequence=180
archived log file name=/u01/app/oracle/fast_recovery_area/CONDB1/archivelog/2014_04_02/o1_mf_1_181_9mq17sc3_.arc thread=1 sequence=181
media recovery complete, elapsed time: 00:00:05
Finished recover at 02-APR-14

sql statement: alter database open read only



Oracle instance shut down


Removing automatic instance
Automatic instance removed
auxiliary instance file /u01/app/oracle/fast_recovery_area/CONDB1/datafile/o1_mf_sysaux_9mq182cw_.dbf deleted
auxiliary instance file /u01/app/oracle/fast_recovery_area/CONDB1/controlfile/o1_mf_9mq17sor_.ctl deleted
Finished recover at 02-APR-14

Statement processed

RMAN>

SQL> conn sh/sh@pdb1
Connected.

SQL> select count(*) from sales;

  COUNT(*)
----------
    918843


SQL> select con_id,  DB_INCARNATION#, PDB_INCARNATION# , INCARNATION_TIME from v$pdb_incarnation order by con_id;

    CON_ID DB_INCARNATION# PDB_INCARNATION# INCARNATI
---------- --------------- ---------------- ---------
         3               2                0 06-NOV-13
         3               2                1 02-APR-14

SQL> conn / as sysdba
Connected.

SQL> /

    CON_ID DB_INCARNATION# PDB_INCARNATION# INCARNATI
---------- --------------- ---------------- ---------
         1               2                0 06-NOV-13
         1               1                0 24-MAY-13
         2               2                0 06-NOV-13
         2               1                0 24-MAY-13
         3               2                1 02-APR-14
         3               2                0 06-NOV-13
         4               2                0 06-NOV-13
         4               1                0 24-MAY-13

8 rows selected.

Note – FLASHBACK DATABASE has to be enabled at the Container database level – it cannot be enabled at the pluggable database level.

SQL> alter database flashback on;
alter database flashback on
*
ERROR at line 1:
ORA-03001: unimplemented feature


SQL> conn / as sysdba
Connected.
SQL> alter database flashback on;

Database altered.

SQL>  select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

Flashback database also cannot be performed at the pluggable database level.

SQL> flashback database pdb3 to scn  26541795;
flashback database pdb3 to scn  26541795
*
ERROR at line 1:
ORA-65040: operation not allowed from within a pluggable database


Database as a Service (DBaaS) using Enterprise Manager 12c Cloud Control


One of the challenges faced today in the database provisioning process is time to deliver.

Consider a normal workflow in this case.

A developer requests a database. The request then goes to his manager for approval. Once that stage is passed the DBA comes into the picture, and the DBA in turn contacts the Storage and System Admins to request the hardware and storage. There could be OS and network setup involved as well, and finally the DBA creates the database and provides access to the developer.

Typically this can take days and sometimes even weeks – using DBaaS with EM12c we reduce the time to deliver to minutes!

Not only is this time consuming and inefficient, but this very process has caused the problems we face today: database and server sprawl, multiple versions and patch levels, high cost of deployment and operation, and poor resource utilization – the current state of most database deployments is largely siloed and complex.

In this example we see a use case of DBaaS where a Developer is able to create an Oracle 12c pluggable database without any intervention from the DBA, by submitting a Service Request using a Service Catalog – but the DBA still controls what developers can and cannot do.

We will introduce DBaaS concepts like Platform as a Service (PaaS) Infrastructure Zones, Database Resource Pools, Service Templates, Quotas, Service Catalogs and finally an example of Chargeback and Metering as well.

We have created two users for the example – CLOUD_ADMIN and DEV_SSA – as well as a CLOUD_SSA role.

Connect as the CLOUD_ADMIN user
 
From the Setup – Cloud – PaaS Infrastructure Zones menu
 

 
A PaaS Infrastructure Zone is a collection of Host targets or an OVM zone. Hosts are targets identified in Enterprise Manager and a single host can be a member of only one PaaS Zone.

We also associate the SSA role we created earlier so as to regulate access to the PaaS Infrastructure Zone.

We also have to provide the host credentials for the member hosts that make up the PaaS Zone.

Placement policy constraints can be used to specify a ceiling or limit on the amount of resources any host in the zone can use.

 
The next step is to create a Database Resource Pool – this will be based on databases and Oracle Homes existing on the hosts that make up the PaaS Infrastructure Zone.

A Database Pool is a collection of resources (homogeneous databases and Oracle Homes) used to provision a database instance within a PaaS Infrastructure Zone – so basically the same platform, type and version.

We can create a Database Pool for Database as a Service, Schema as a Service or PDB as a service.

Also at the Database Resource Pool level we can specify some placement constraints to control resource utilization.

Through the use of Quotas we can also control and limit resource utilization at the SSA user or role level.

From the Setup – Cloud – Databases menu
 


The services that are offered to SSA Cloud users are defined via Service Templates. A Service Template can also be based on a database provisioning Profile.

In this example we are creating a service template for provisioning an EMPTY PLUGGABLE DATABASE.

The PaaS Infrastructure Zone created earlier is associated with the Service Template, and we also assign the Database Resource Pool. In this way we regulate which hosts or OVM zones can be used by a particular SSA user, as well as the type of database that can be created via the Self Service Portal.

The Service Template can also define other things like the number of tablespaces which will be created in the pluggable database, the total size of the PDB, init.ora parameters and other resource limits like number of database sessions etc.


Next connect as the DEV_SSA Cloud SSA user

From the Cloud menu select Self Service Portal

The DEV_SSA user will request the creation of a 12c Pluggable Database from the Service Catalog available via the Self Service Portal.

Note that in this example the DEV_SSA user has been given a limit of just one service request and a quota of 1 GB of RAM and 1 GB of disk space, and this quota is exhausted when he submits the service request for the creation of the pluggable database.

After submitting the service request the user can track the progress of the request.

After the request completes a Pluggable Database has been created and is now available for use by the developer.

So in this case a developer has created a 12c Pluggable Database in a few minutes with no involvement of the DBA, System Administrator or Storage Administrator. The DBA or Cloud Administrator still controls which physical host the database is created on, which Container Database the pluggable database is a tenant of, and how much in the way of resources can be allocated in terms of disk space, RAM and CPU.

This is EM12c enabling Database as a Service!


Let us take a brief look at how Chargeback and Metering work in a DBaaS model.

Simply put, charges are calculated based on resource usage and then allocated or metered to the individual cost centers that consume the Database as a Service offering.

In this example we create a charge plan for the Pluggable Database entity and the metric being used to calculate the charges is Uptime of the database.

The user is charged a rate of $10 per hour of database uptime.
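
For example, at that rate a pluggable database that stays up around the clock would accrue roughly 24 x $10 = $240 per day, or about $7,200 over a 30-day month.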
 

 
The SSA user can also view a number of Metering and Chargeback reports. An Enterprise Manager job runs once every 24 hours to collect the data displayed in these reports.

In this case we force a manual collection of the data. Once the job completes we can see a number of graphical Chargeback reports showing the consumption and the amount payable for usage of the service; the same information is also shown on a daily basis in case the user wants a day-by-day breakdown of the charges.
 

Database Cloning and Provisioning using EM12c DBaaS


Let us look at a common requirement the DBA faces on a regular basis related to performing a database clone.

A developer needs a copy or clone of the production database to test some urgent fixes and requires this database for a short period of time to perform the testing – after which this database can be dropped. There may be further requirement to blow the database away and have another copy of production data in case another round of further testing is required.

We all know how long in a normal case it will take to fulfill this requirement – days and maybe even weeks – and the amount of work involved to provision this database.

Now with Database as a Service (DBaaS) being offered via EM12c, the developer can get his clone of the production database created when he wants with a click of a button using the Cloud Self Service Portal. The DBA does not need to go and perform a backup of the database or be involved in any way.

But at the same time the DBA has total control over the service which is offered to the developer – which host the clone will get created on, the size of the database, disk space it can occupy and even init.ora parameters which will be used to create the cloned database among other things.

In this example we have a source 12.1.0.2 ‘production’ database – db12c – and we will create a provisioning profile to clone this database on a target development host. After a week of use the database will be automatically dropped, and the developer has the option of requesting creation of another clone in case more testing is required.

We will base the clone on an existing RMAN backup – note that we need to copy the RMAN backupset to the target server where the clone is being created.
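
As a rough sketch of that preparatory step (the staging path and connect string below are assumptions for illustration, not taken from this setup), the backup could be taken and copied along these lines:

$ rman target sys/syspasswd@db12c

RMAN> backup database format '/u01/stage/db12c_%U.bkp' plus archivelog;
RMAN> backup current controlfile format '/u01/stage/db12c_ctl_%U.bkp';
RMAN> exit

$ scp /u01/stage/db12c_*.bkp oracle@devhost:/u01/stage/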

In this case we have a Platform as a Service (PaaS) Infrastructure Zone called development_zone and that zone has two member hosts. Let us say one of the hosts is the production host where the source database is located, and the other is the development or test host where the clone will be created.

To see the details of the PaaS Infrastructure Zone highlight the zone and click on the Edit button.


We now create a Database Pool in the PaaS zone – the Database Resource Pool will be a non-CDB based pool.


We enter the details of the PaaS Infrastructure Zone and the OS platform, and based on the database version (in this case 12.1.0.2) the available Oracle Homes on both hosts that make up the PaaS zone are displayed.


In this case the database will be made available only for a period of 7 days so we specify the retention duration under the Request Settings.


In the Quotas section we limit the number of database requests the SSA user can make as well as the total amount of machine RAM and disk space which will be made available to the user. This is being regulated via the CLOUD_SSA role.


Next we create a Database Provisioning Profile. In the Profiles section click on Create.


Select the reference or source database on which the profile will be based.


A clone of the reference database (db12c) will be created using an existing RMAN backup.


After the profile is created we now have to create a Service Template based on that profile.


Note that the Shared Location on the Reference Host (which is the host where the clone is being created) is the location where we have copied the RMAN backup sets. This is the RMAN backup of the source database on which the clone is going to be based.


The location of the data files can be defined as well as the prefix which will be used for the database name. Other things like listener port and passwords can also be defined.


Here we can change some of the init.ora parameters if required – for example we may want to ensure that the clone database is allocated a smaller SGA or PGA than the source production database or change some other parameter related to performance or availability.


 
Let us now connect as the Cloud Self Service user.
 
The user requests the creation of a database and selects the Service Template which is available to him.

After submitting the request the user can monitor the progress of the request.

After the database has been created we can see the instance dev00000 up and running. We can also see the amount of resources (RAM and disk space) the user has consumed so far, and how much is still available for future requests.


PSU Patch Deployment using EM12c


The Patch Deployment feature in EM12c can greatly help in automating the rollout of patches when we have to deploy a patch to a large number of targets – this significantly reduces both the time and complexity involved in the process.

Let us look at an example of deploying the JAN 2015 PSU patch using EM12c.

We first need to upload the JAN 2015 PSU patch 19769480 as well as OPatch version 12.1.0.1.6 to the EM12c Software Library, as we are operating in Offline Patching mode.

Note that we have to upload the Patch Metadata file along with the patch as well.


 
Click on the 19769480 link in the Patch Name column
 

 

We will add the patch to a New Patch Plan
 
 
We can deploy the PSU patch on all available hosts with 12.1.0.2 Oracle databases – in this example we are deploying the PSU to just a single host.
 
 
After the patch plan has been created we edit the patch plan via the Patches & Updates menu
 

 
The Patch deployment can be In Place or Out of Place. In this case we will be applying the PSU patch to the existing Oracle Home. Provide the required Normal and Privileged Credentials and validate the same.
 

 
We then need to run an Analyze of the PSU patch. The patch is staged and checks are performed to determine if all the patch prerequisites are met.
 

 
After the patch has been successfully analyzed, we can see that it is now ready for deployment.
 

 
Review the patch plan and then click on the Deploy button
 

 
Since the PSU patch deployment will require a database outage we can schedule the patch to be deployed at a specific time, or it can be deployed to start immediately.
 

 
While the patch deployment is in progress we can view the different actions being performed at each step and have very good visibility of the patch application as it proceeds.
 

 
After the PSU patch application we can now see that the Post SQL script is being applied while the database has been started in Upgrade Mode.
 

 
A Blackout is created automatically as part of the patch deployment, and the Blackout is cleared as one of the last steps in the patch deployment plan.
 

 
If we select the relevant 12.1.0.2 Oracle Home in the Targets menu, we can see under the Patches Applied tab that patch 19769480 has been successfully applied, and we can also see the various bugs which have been fixed by this PSU patch.
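
The same verification can also be done from the command line with OPatch – a minimal sketch, assuming the Oracle Home path below is the one that was patched:

$ export ORACLE_HOME=/u01/app/oracle/product/12.1.0.2/dbhome_1
$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 19769480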
 


12.1.0.2 Multitenant Database New Features


Here's a quick look at some of the new features introduced in 12.1.0.2 around Pluggable and Container databases.

PDB CONTAINERS Clause

Using the CONTAINERS clause, from the root container we can issue a query which selects or aggregates data across multiple pluggable databases.

For example, each pluggable database can contain data for a specific geographic region, and we can issue a query from the root container which aggregates the data from all the individual regions.

The requirement is that we have to create an empty table in the root container with just the structure of the tables contained in the PDB’s.

In this example we have a table called MYOBJECTS and the pluggable databases are DEV1 and DEV2.

Each pluggable database has its own copy of the MYOBJECTS table.

We have a common user C##USER who owns the MYOBJECTS table in all the pluggable databases.
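
As a minimal sketch of that setup (the password and privileges shown are illustrative assumptions, and the MYOBJECTS copies in DEV1 and DEV2 are assumed to already exist):

SQL> conn / as sysdba
Connected.

SQL> create user c##user identified by welcome1 container=all;
SQL> grant create session, create table, set container, unlimited tablespace to c##user container=all;

SQL> conn c##user/welcome1
Connected.

SQL> -- structure-only copy in the root container, no rows
SQL> create table myobjects as select * from dba_objects where 1=0;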


SQL> alter session set container=dev1;

Session altered.

SQL> select count(*) from myobjects
  2  where object_type='TABLE';

  COUNT(*)
----------
      2387

SQL> alter session set container=dev2;

Session altered.

SQL> select count(*) from myobjects
  2  where object_type='TABLE';

  COUNT(*)
----------
      2350


Now connect to the root container. We are able to issue a query which aggregates data from both Pluggable databases – DEV1 and DEV2.

Note the root container has a table also called MYOBJECTS – but with no rows.



SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> select con_id, name from v$pdbs;

    CON_ID NAME
---------- ------------------------------
         2 PDB$SEED
         3 DEV1
         4 DEV2


SQL> select count(*) from myobjects;

  COUNT(*)
----------
         0


SQL> select count(*) from containers ( myobjects)
  2  where object_type='TABLE'
  3  and con_id in (3,4);

  COUNT(*)
----------
      4737

PDB Subset Cloning

12.1.0.2 extends pluggable database cloning so that we can clone just a subset of a source database. The USER_TABLESPACES clause allows us to specify which tablespaces need to be available in the new cloned pluggable database.

In this example the source Pluggable Database (DEV1) has application data located in two tablespaces – USERS and TEST_DATA.

The requirement is to create a clone of the DEV1 pluggable database, but the target database only requires the tables contained in the TEST_DATA tablespace.

This would be useful in a case where we are migrating data from a non-CDB database which contains multiple schemas and we perform some kind of schema consolidation where each schema is self-contained in its own pluggable database.

Note the MYOBJECTS table is contained in the USERS tablespace, and we are creating a new tablespace TEST_DATA which will contain the MYTABLES table. The cloned database only requires the TEST_DATA tablespace.

SQL> alter session set container=dev1;

Session altered.


SQL> select tablespace_name from dba_tables where table_name='MYOBJECTS';

TABLESPACE_NAME
------------------------------
USERS

SQL> select count(*) from system.myobjects;

  COUNT(*)
----------
     90922


SQL> create tablespace test_data
  2  datafile
  3  '/oradata/cdb1/dev1/dev1_test_data01.dbf'
  4  size 50m;

Tablespace created.

SQL> create table system.mytables
  2  tablespace test_data
  3  as select * from dba_tables;

Table created.

SQL> select file_name, tablespace_name from dba_data_files;

FILE_NAME                                TABLESPACE_NAME
---------------------------------------- ------------------------------
/oradata/cdb1/dev1/system01.dbf          SYSTEM
/oradata/cdb1/dev1/sysaux01.dbf          SYSAUX
/oradata/cdb1/dev1/dev1_users01.dbf      USERS
/oradata/cdb1/dev1/dev1_test_data01.dbf  TEST_DATA

We will now create the clone database – DEV3 – using DEV1 as the source. Note the USER_TABLESPACES clause, which defines the tablespaces we want to be part of the cloned pluggable database.


SQL> ! mkdir /oradata/cdb1/dev3/

SQL> conn / as sysdba
Connected.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> CREATE PLUGGABLE DATABASE dev3 FROM dev1
FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev3/')
USER_TABLESPACES=('TEST_DATA')  ;

Pluggable database created.

SQL> alter pluggable database dev3 open;

Pluggable database altered.

If we connect to the DEV3 database we can see a list of the data files which make up the PDB.

We can see that the datafile which belongs to the USERS tablespace has the MISSING keyword included in its name. While we can now select from tables which were contained in the TEST_DATA tablespace on the source (like MYTABLES), we obviously cannot access tables (like MYOBJECTS) which existed in tablespaces that were not part of the USER_TABLESPACES clause of the CREATE PLUGGABLE DATABASE command.

To clean up the database we can now drop the other tablespaces like USERS which are not required in the cloned database.


SQL> alter session set container=dev3;

Session altered.


select file_name, tablespace_name from dba_data_files;


FILE_NAME                                                              TABLESPACE_NAME
---------------------------------------------------------------------- ------------------------------
/oradata/cdb1/dev3/system01.dbf                                        SYSTEM
/oradata/cdb1/dev3/sysaux01.dbf                                        SYSAUX
/u01/app/oracle/product/12.1.0.2/dbs/MISSING00017                      USERS
/oradata/cdb1/dev3/dev1_test_data01.dbf                                TEST_DATA


SQL> select count(*) from system.mytables;

  COUNT(*)
----------
      2339

SQL> select count(*) from system.myobjects;
select count(*) from system.myobjects
                            *
ERROR at line 1:
ORA-00376: file 21 cannot be read at this time
ORA-01111: name for data file 21 is unknown - rename to correct file
ORA-01110: data file 21: '/u01/app/oracle/product/12.1.0.2/dbs/MISSING00021'



SQL> alter database default tablespace test_data;

Database altered.

SQL> drop tablespace users including contents and datafiles;

Tablespace dropped.

PDB Metadata Clone

There is also an option to create a clone of a pluggable database with just the structure or definition of the source database, but without any of the user or application data in its tables and indexes.

This feature can help in the rapid provisioning of test or development environments where just the structure of the production database is required; after the pluggable database has been created it can be populated with test data.

In this example we are creating the DEV4 pluggable database which just has the data dictionary and metadata of the source DEV1 database. Note the use of the NO DATA clause.


SQL> conn / as sysdba
Connected.

SQL> ! mkdir /oradata/cdb1/dev4

SQL> CREATE PLUGGABLE DATABASE dev4 FROM dev1
  2  FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev4/')
  3  NO DATA;

Pluggable database created.

SQL> alter pluggable database dev4 open;

Pluggable database altered.

SQL> alter session set container=dev4;

Session altered.

SQL>  select count(*) from system.myobjects;

  COUNT(*)
----------
         0

SQL> select count(*) from system.mytables;

  COUNT(*)
----------
         0


SQL> select file_name, tablespace_name from dba_data_files;

FILE_NAME                                                              TABLESPACE_NAME
---------------------------------------------------------------------- ------------------------------
/oradata/cdb1/dev4/system01.dbf                                        SYSTEM
/oradata/cdb1/dev4/sysaux01.dbf                                        SYSAUX
/oradata/cdb1/dev4/dev1_users01.dbf                                    USERS
/oradata/cdb1/dev4/dev1_test_data01.dbf                                TEST_DATA


PDB State Management Across CDB Restart

In Oracle 12c version 12.1.0.1, when we started a CDB, by default all the PDB’s except the seed were left in MOUNTED state, and we had to issue an explicit ALTER PLUGGABLE DATABASE ALL OPEN command to open them.

SQL> startup;
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.


SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           MOUNTED
DEV2                           MOUNTED
DEV3                           MOUNTED
DEV4                           MOUNTED

SQL> alter pluggable database all open;

Pluggable database altered.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           READ WRITE
DEV2                           READ WRITE
DEV3                           READ WRITE
DEV4                           READ WRITE

Now in 12.1.0.2 using the SAVE STATE command we can preserve the open mode of a pluggable database (PDB) across multitenant container database (CDB) restarts.

So if a PDB was open in READ WRITE mode when the CDB was shut down, on restarting the CDB that PDB will be opened in READ WRITE mode automatically, without the DBA having to execute the ALTER PLUGGABLE DATABASE ALL OPEN command that was required in the earlier 12c version.

SQL> sho con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> alter pluggable database all save state;

Pluggable database altered.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup;
ORACLE instance started.

Total System Global Area  805306368 bytes
Fixed Size                  2929552 bytes
Variable Size             318770288 bytes
Database Buffers          478150656 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

SQL> select name,open_mode from v$pdbs;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
DEV1                           READ WRITE
DEV2                           READ WRITE
DEV3                           READ WRITE
DEV4                           READ WRITE

PDB Remote Clone

In 12.1.0.2 we can now create a PDB from a non-CDB source by cloning it over a database link. This feature further enhances the rapid provisioning of pluggable databases.

In non-CDB:

SQL> grant create pluggable database to system;

Grant succeeded.

In CDB root – create a database link to the non-CDB:

SQL> create database link non_cdb_link
  2  connect to system identified by oracle using 'upgr';

Database link created.

SQL> select * from dual@non_cdb_link;

D
-
X

Now shut down the non-CDB and open it in READ ONLY mode.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area  826277888 bytes
Fixed Size                  2929792 bytes
Variable Size             322964352 bytes
Database Buffers          494927872 bytes
Redo Buffers                5455872 bytes
Database mounted.

SQL> alter database open read only;

Database altered.


Create the pluggable database DEV5 from the non-CDB source using the database link we just created.


CREATE PLUGGABLE DATABASE dev5 FROM dev1@non_cdb_link
FILE_NAME_CONVERT = ('/oradata/cdb1/dev1/', '/oradata/cdb1/dev5/');

After the PDB has been created we need to run the noncdb_to_pdb.sql script and then open the PDB.


SQL> alter session set container=dev5;

Session altered.

SQL> @$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql


SQL> alter pluggable database open;

Pluggable database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
/oradata/cdb1/undotbs01.dbf
/oradata/cdb1/dev5/system01.dbf
/oradata/cdb1/dev5/sysaux01.dbf
/oradata/cdb1/dev5/users01.dbf
/oradata/cdb1/dev5/aq01.dbf


Oracle Goldengate 12c on DBFS for RAC and Exadata


Let us take a look at the process of configuring Goldengate 12c to work in an Oracle 12c Grid Infrastructure RAC or Exadata environment using DBFS on Linux x86-64.

Simply put, the Oracle Database File System (DBFS) is a standard file system interface on top of files and directories that are stored in database tables as LOBs.

In one of my earlier posts we had seen how we can configure Goldengate in an Oracle 11gR2 RAC environment using ACFS as the shared location.

Until recently Exadata did not support using ACFS but ACFS is now supported on version 12.1.0.2 of the RAC Grid Infrastructure.

In this post we will see how the Oracle DBFS (Database File System) will be setup and configured and used as the shared location for some of the key Goldengate files like the trail files and checkpoint files.

In summary the broad steps involved are:

1) Install and configure FUSE (Filesystem in Userspace)
2) Create the DBFS user and DBFS tablespaces
3) Mount the DBFS filesystem
4) Create symbolic links for the Goldengate software directories dirchk, dirpcs, dirdat, BR to point to directories on DBFS
5) Create the Application VIP
6) Download the mount-dbfs.sh script from MOS and edit it according to our environment
7) Create the DBFS Cluster Resource
8) Download and install the Oracle Grid Infrastructure Bundled Agent
9) Register Goldengate with the bundled agents using the agctl utility

Install and Configure FUSE

Using the following command check if FUSE has been installed:

lsmod | grep fuse

FUSE can be installed in a couple of ways – either via the Yum repository or using the RPMs available on the OEL software media.

Using Yum:

yum install kernel-devel
yum install fuse fuse-libs

Via RPMs:

If installing from the media, these are the RPMs which are required:

kernel-devel-2.6.32-358.el6.x86_64.rpm
fuse-2.8.3-4.el6.x86_64.rpm
fuse-devel-2.8.3-4.el6.x86_64.rpm
fuse-libs-2.8.3-4.el6.x86_64.rpm

A group named fuse must be created and the OS user who will be mounting the DBFS filesystem needs to be added to the fuse group.

For example, if the OS user is ‘oracle’, we use the usermod command to modify the secondary group membership for the oracle user. It is important to ensure we do not drop any groups the user is already a member of.

# /usr/sbin/groupadd fuse

# usermod -G dba,fuse oracle

One of the mount options which we will use is called “allow_other” which will enable users other than the user who mounted the DBFS file system to access the file system.

The /etc/fuse.conf needs to have the “user_allow_other” option as shown below.

# cat /etc/fuse.conf
user_allow_other

chmod 644 /etc/fuse.conf

Important: Ensure that the variable LD_LIBRARY_PATH is set and includes the path to $ORACLE_HOME/lib. Otherwise we will get an error when we try to mount the DBFS using the dbfs_client executable.
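For example, something along these lines in the profile of the OS user performing the mount (the ORACLE_HOME path below is the one used elsewhere in this post – adjust for your environment):

export ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH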

Create the DBFS tablespaces and mount the DBFS

If the source database used by the Goldengate Extract is running on RAC or hosted on Exadata then we will create ONE tablespace for DBFS.

If the target database where Replicat will be applying changes is on RAC or Exadata, then we will create TWO tablespaces for DBFS, each with different logging and caching settings – typically one tablespace will be used for the Goldengate trail files and the other for the Goldengate checkpoint files.

If using Exadata then typically an ASM disk group called DBFS_DG will already be available for us to use, otherwise on a non-Exadata platform we will create a separate ASM disk group for holding the DBFS files.

Note that since we will be storing Goldengate trail files on DBFS, a best practice is to allocate enough disk and tablespace space to retain at least 12 hours of trail files. We need to keep that in mind when we create the ASM disk group and the DBFS tablespace.
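As a rough sizing sketch (the generation rate below is purely hypothetical): if Extract writes around 20 GB of trail data per hour, then 12 hours of retention requires roughly 12 x 20 GB = 240 GB, plus some headroom for peak periods – so both the DBFS tablespace and the underlying ASM disk group should be sized with that figure in mind.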

CREATE bigfile TABLESPACE dbfs_ogg_big datafile '+DBFS_DG' SIZE
1000M autoextend ON NEXT 100M LOGGING EXTENT MANAGEMENT LOCAL
AUTOALLOCATE SEGMENT SPACE MANAGEMENT AUTO;

Create the DBFS user

CREATE USER dbfs_user IDENTIFIED BY dbfs_pswd 
DEFAULT TABLESPACE dbfs_ogg_big
QUOTA UNLIMITED ON dbfs_ogg_big;

GRANT create session, create table, create view, 
create procedure, dbfs_role TO dbfs_user; 


Create the DBFS Filesystem

To create the DBFS filesystem we connect as the DBFS_USER Oracle user account and either run the dbfs_create_filesystem.sql or dbfs_create_filesystem_advanced.sql script located under $ORACLE_HOME/rdbms/admin directory.

For example:

cd $ORACLE_HOME/rdbms/admin 

sqlplus dbfs_user/dbfs_pswd 


SQL> @dbfs_create_filesystem dbfs_ogg_big  gg_source

OR

SQL> @dbfs_create_filesystem_advanced.sql dbfs_ogg_big  gg_source
      nocompress nodeduplicate noencrypt non-partition 

Where …
o dbfs_ogg_big: tablespace for the DBFS database objects
o gg_source: filesystem name, this can be any string and will appear as a directory under the mount point

If we were configuring DBFS on the Goldengate target or Replicat side of things, it is recommended to use the NOCACHE LOGGING attributes for the tablespace which holds the trail files because of the sequential reading and writing nature of the trail files.

For the checkpoint files on the other hand it is recommended to use CACHING and LOGGING attributes instead.

The example shown below illustrates how we can modify the LOB attributes (assuming we have created two DBFS tablespaces).

SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%'; 

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             NO        YES



SQL> ALTER TABLE dbfs_user.T_DBFS_SM 
     MODIFY LOB (FILEDATA) (CACHE LOGGING); 


SQL> SELECT table_name, segment_name, cache, logging FROM dba_lobs 
     WHERE tablespace_name like 'DBFS%';  

TABLE_NAME              SEGMENT_NAME                CACHE     LOGGING
----------------------- --------------------------- --------- -------
T_DBFS_BIG              LOB_SFS$_FST_1              NO        YES
T_DBFS_SM               LOB_SFS$_FST_11             YES       YES


As the user root, now create the DBFS mount point on ALL nodes of the RAC cluster (or Exadata compute servers).


# cd /mnt 
# mkdir DBFS 
# chown oracle:oinstall DBFS/

Create a custom tnsnames.ora file in a separate location (on each node of the RAC cluster).

For example, in our 2-node RAC cluster these are the entries we will make for the ORCL RAC database.

Node A

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl1)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl1')
      )
  (CONNECT_DATA=(SID=orcl1))
)

Node B

orcl =
  (DESCRIPTION =
      (ADDRESS =
        (PROTOCOL=BEQ)
        (PROGRAM=/u02/app/oracle/product/12.1.0/dbhome_1/bin/oracle)
        (ARGV0=oracleorcl2)
        (ARGS='(DESCRIPTION=(LOCAL=YES)(ADDRESS=(PROTOCOL=BEQ)))')
        (ENVS='ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1,ORACLE_SID=orcl2')
      )
  (CONNECT_DATA=(SID=orcl2))
)


 

We will need to provide the password for the DBFS_USER database user account when we mount the DBFS filesystem via the dbfs_client command. We can either store the password in a text file or we can use an Oracle Wallet to encrypt and store the password.
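If the Oracle Wallet route is preferred, a minimal sketch would look something like the following (the wallet directory is hypothetical and the TNS alias orcl is the one defined above):

$ mkstore -wrl /home/oracle/dbfs_wallet -create
$ mkstore -wrl /home/oracle/dbfs_wallet -createCredential orcl dbfs_user dbfs_pswd

# sqlnet.ora in the TNS_ADMIN location used by dbfs_client
WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/home/oracle/dbfs_wallet)))
SQLNET.WALLET_OVERRIDE=TRUE

$ nohup $ORACLE_HOME/bin/dbfs_client /@orcl -o wallet,allow_other,direct_io /mnt/DBFS &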

In this example we are not using the Oracle Wallet, so we need to create a file (on all nodes of the RAC cluster) which will contain the DBFS_USER password.

For example:


echo dbfs_pswd > passwd.txt 

nohup $ORACLE_HOME/bin/dbfs_client dbfs_user@orcl -o allow_other,direct_io /mnt/DBFS < ~/passwd.txt &

After the DBFS filesystem is mounted successfully we can see it via the ‘df’ command as shown below. Note that in this case we had created a 5 GB tablespace for DBFS and the allocated and used space reflects that.


$  df -h |grep dbfs

dbfs-dbfs_user@:/     4.9G   11M  4.9G   1% /mnt/dbfs

The command used to unmount the DBFS filesystem would be:

fusermount -u 

Create links from Oracle Goldengate software directories to DBFS

Create the following directories on DBFS

$ mkdir /mnt/dbfs/gg_source/goldengate
$ cd /mnt/dbfs/gg_source/goldengate
$ mkdir dirchk
$ mkdir dirpcs 
$ mkdir dirprm
$ mkdir dirdat
$ mkdir BR

Make the symbolic links from Goldengate software directories to DBFS

cd /u03/app/oracle/goldengate
mv dirchk dirchk.old
mv dirdat dirdat.old
mv dirpcs dirpcs.old
mv dirprm dirprm.old
mv BR BR.old
ln -s /mnt/dbfs/gg_source/goldengate/dirchk dirchk
ln -s /mnt/dbfs/gg_source/goldengate/dirdat dirdat
ln -s /mnt/dbfs/gg_source/goldengate/dirprm dirprm
ln -s /mnt/dbfs/gg_source/goldengate/dirpcs dirpcs
ln -s /mnt/dbfs/gg_source/goldengate/BR BR

For example :

[oracle@rac2 goldengate]$ ls -l dirdat
lrwxrwxrwx 1 oracle oinstall 26 May 16 15:53 dirdat -> /mnt/dbfs/gg_source/goldengate/dirdat

Also copy the jagent.prm file, which ships out of the box in the dirprm directory, to the dirprm directory on DBFS:

[oracle@rac2 dirprm.old]$ pwd
/u03/app/oracle/goldengate/dirprm.old
[oracle@rac2 dirprm.old]$ cp jagent.prm /mnt/dbfs/gg_source/dirprm

Note – in the Extract parameter file(s) we need to include the BR parameter pointing to the BR directory stored on DBFS:

BR BRDIR /mnt/dbfs/gg_source/goldengate/BR
 

Create the Application VIP

Typically the Goldengate source and target databases will not be located on the same Exadata machine, and even in a non-Exadata RAC environment the source and target databases are usually on different RAC clusters. In that case we have to use an Application VIP, which is a cluster resource managed by Oracle Clusterware; the VIP assigned to one node will be seamlessly transferred to a surviving node in the event of a RAC (or Exadata compute) node failure.

Run the appvipcfg command to create the Application VIP as shown in the example below.


$GRID_HOME/bin/appvipcfg create -network=1 -ip=192.168.56.90 -vipname=gg_vip_source -user=root

We have to assign an unused IP address to the Application VIP. We run the following command to identify the value we use for the network parameter as well as the subnet for the VIP.

$ crsctl stat res -p |grep -ie .network -ie subnet |grep -ie name -ie subnet

NAME=ora.net1.network
USR_ORA_SUBNET=192.168.56.0

As root give the Oracle Database software owner permissions to start the VIP.

$GRID_HOME/bin/crsctl setperm resource gg_vip_source -u user:oracle:r-x 

As the Oracle database software owner start the VIP

$GRID_HOME/bin/crsctl start resource gg_vip_source

Verify the status of the Application VIP


$GRID_HOME/bin/crsctl status resource gg_vip_source

 

Download the mount-dbfs.sh script from MOS

Download the mount-dbfs.sh script from MOS note 1054431.1.

Copy it to a temporary location on one of the Linux RAC nodes and run the command as root:

# dos2unix /tmp/mount-dbfs.sh

Change the ownership of the file to the Oracle Grid Infrastructure owner and also copy the file to the $GRID_HOME/crs/script directory location.

Next make changes to the environment variable settings section of the mount-dbfs.sh script as required. These are the changes I made to the script.

### Database name for the DBFS repository as used in "srvctl status database -d $DBNAME"
DBNAME=orcl

### Mount point where DBFS should be mounted
MOUNT_POINT=/mnt/dbfs

### Username of the DBFS repository owner in database $DBNAME
DBFS_USER=dbfs_user

### RDBMS ORACLE_HOME directory path
ORACLE_HOME=/u02/app/oracle/product/12.1.0/dbhome_1

### This is the plain text password for the DBFS_USER user
DBFS_PASSWD=dbfs_pswd

### TNS_ADMIN is the directory containing tnsnames.ora and sqlnet.ora used by DBFS
TNS_ADMIN=/u02/app/oracle/admin

### TNS alias used for mounting with wallets
DBFS_LOCAL_TNSALIAS=orcl

Create the DBFS Cluster Resource

Before creating the Cluster Resource for DBFS, test the mount-dbfs.sh script:

$ ./mount-dbfs.sh start
$ ./mount-dbfs.sh status
Checking status now
Check – ONLINE

$ ./mount-dbfs.sh stop

As the Grid Infrastructure owner, create a script called add-dbfs-resource.sh and store it in the $GRID_HOME/crs/script directory.

This script will create a cluster-managed resource called dbfs_mount by calling the action script mount-dbfs.sh which we configured earlier.

Edit the following variables in the script as shown below:

ACTION_SCRIPT
RESNAME
DEPNAME ( this can be the Oracle database or a database service)
ORACLE_HOME

#!/bin/bash
ACTION_SCRIPT=/u02/app/12.1.0/grid/crs/script/mount-dbfs.sh
RESNAME=dbfs_mount
DEPNAME=ora.orcl.db
ORACLE_HOME=/u01/app/12.1.0.2/grid
PATH=$ORACLE_HOME/bin:$PATH
export PATH ORACLE_HOME
crsctl add resource $RESNAME \
-type cluster_resource \
-attr "ACTION_SCRIPT=$ACTION_SCRIPT, \
CHECK_INTERVAL=30,RESTART_ATTEMPTS=10, \
START_DEPENDENCIES='hard($DEPNAME)pullup($DEPNAME)',\
STOP_DEPENDENCIES='hard($DEPNAME)',\
SCRIPT_TIMEOUT=300"

Execute the script – it should produce no output.

./add-dbfs-resource.sh

 

Download and Install the Oracle Grid Infrastructure Bundled Agent

Starting with Oracle 11.2.0.3 on 64-bit Linux, out-of-the-box Oracle Grid Infrastructure bundled agents were introduced with predefined clusterware resources for applications like Siebel and Goldengate.

The bundled agent for Goldengate provided integration between Oracle Goldengate and dependent resources like the database, filesystem and the network.

The AGCTL agent command line utility can be used to start and stop Goldengate as well as relocate Goldengate resources between nodes in the cluster.
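For example, a manual relocation of the instance to another node would look something like this (a sketch – the exact options can vary between agent versions):

$ agctl relocate goldengate gg_source --node rac2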

Download the latest version of the agent (6.1) from the URL below:

http://www.oracle.com/technetwork/database/database-technologies/clusterware/downloads/index.html

The downloaded file will be xagpack_6.zip.

There is already an xag/bin directory with the agctl executable under the $GRID_HOME root directory. We need to install the new bundled agent in a separate directory and ensure that $PATH includes the new agent's bin directory ahead of the one under $GRID_HOME.

Unzip the xagpack_6.zip in a temporary location on one of the RAC nodes.

To install the Oracle Grid Infrastructure Agents we run the xagsetup.sh script as shown below:

xagsetup.sh --install --directory <install_directory> [{--nodes node1,node2 | --all_nodes}]
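Once the agent is installed, a minimal sketch of amending the $PATH (assuming the agent was installed under /home/oracle/xagent, the location used later in this post):

$ export PATH=/home/oracle/xagent/bin:$PATH
$ which agctl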

Register Goldengate with the bundled agents using agctl utility

Using the agctl utility, create the GoldenGate configuration.

Ensure that we are running agctl from the downloaded bundled agent directory and not from the $GRID_HOME/xag/bin directory or ensure that the $PATH variable has been amended as described earlier.

/home/oracle/xagent/bin/agctl add goldengate gg_source --gg_home /u03/app/oracle/goldengate \
--instance_type source \
--nodes rac1,rac2 \
--vip_name gg_vip_source \
--filesystems dbfs_mount --databases ora.orcl.db \
--oracle_home /u02/app/oracle/product/12.1.0/dbhome_1 \
--monitor_extracts ext1,extdp1
 

Once GoldenGate is registered with the bundled agent, we should only use agctl to start and stop Goldengate processes. The agctl command will start the Manager process which in turn will start the other processes like Extract, Data Pump and Replicat if we have configured them for automatic restart.
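As a minimal sketch, that automatic start/restart behaviour is driven by entries in the Manager parameter file similar to the following (the port and retry values are purely illustrative):

-- mgr.prm
PORT 7809
AUTOSTART ER *
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60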

Let us look at some examples of using agctl.

Check the Status – note the DBFS filesystem is also mounted currently on node rac2

$ pwd
/home/oracle/xagent/bin
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2


$ cd /mnt/dbfs/
$ ls -lrt
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

Stop the Goldengate environment

$ ./agctl stop goldengate gg_source 
$ ./agctl status goldengate gg_source
Goldengate  instance ' gg_source ' is not running

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     STOPPED     EXT1        00:00:03      00:01:19
EXTRACT     STOPPED     EXTDP1      00:00:00      00:01:18

Start the Goldengate environment – note the resource has relocated to node rac1 from rac2 and the Goldengate processes on rac2 have been stopped and started on node rac1.

$ ./agctl start goldengate gg_source
$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac1

GGSCI (rac2.localdomain) 2> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED


GGSCI (rac1.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:09      00:00:06
EXTRACT     RUNNING     EXTDP1      00:00:00      00:05:22

We can also see that the agctl has unmounted DBFS on rac2 and mounted it on rac1 automatically.

[oracle@rac1 goldengate]$ ls -l /mnt/dbfs
total 0
drwxrwxrwx 9 root root 0 May 16 15:37 gg_source

[oracle@rac2 goldengate]$ ls -l /mnt/dbfs
total 0

Let's test the whole thing!!

Now that the Goldengate resources are running on node rac1, let us see what happens when we reboot that node to simulate a node failure while Goldengate is up and running and the Extract and Data Pump processes are active on the source.

AGCTL and Cluster Services will relocate all the Goldengate resources, VIP, DBFS to the other node seamlessly and we see that the Extract and Data Pump processes have been automatically started up on node rac2.

[oracle@rac1 goldengate]$ su -
Password:
[root@rac1 ~]# shutdown -h now

Broadcast message from oracle@rac1.localdomain
[root@rac1 ~]#  (/dev/pts/0) at 19:45 ...

The system is going down for halt NOW!

Connect to the surviving node rac2 and check ……

[oracle@rac2 bin]$ ./agctl status goldengate gg_source
Goldengate  instance 'gg_source' is running on rac2

GGSCI (rac2.localdomain) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:07      00:00:02
EXTRACT     RUNNING     EXTDP1      00:00:00      00:00:08

Check the Cluster Resource ….

[oracle@rac2 bin]$ crsctl stat res dbfs_mount -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
dbfs_mount
      1        ONLINE  ONLINE       rac2                     STABLE
--------------------------------------------------------------------------------

Oracle 12c Pluggable Database Upgrade


Until very recently I had really believed the marketing hype and sales pitch about how in 12c database upgrades are so much faster and easier than earlier releases – just unplug the PDB from one container and plug it in to another container and bingo you have an upgraded database!

Partly true …. maybe about 20%!

As Mike Dietrich from Oracle Corp. has rightly pointed out on his great blog (http://blogs.oracle.com/upgrade), it is not as straightforward as suggested in the slides that I am sure many of us have seen at various Oracle conferences showcasing Oracle Database 12c.

I tested out the upgrade of a PDB from version 12.1.0.1 to the latest 12c version 12.1.0.2 and here are the steps taken.

Note: If we are upgrading the entire CDB and all the PDBs, the steps would be different.

In this case we are upgrading just one of the pluggable databases to a higher database software version.
 

Run the preupgrd.sql script and pre-upgrade fixup script

 
Connect to the 12.1.0.1 database that is to be upgraded and run the preupgrd.sql script.

The source container database is cdb3 and the PDB which we are upgrading is pdb_gavin.

[oracle@edmbr52p5 ~]$ . oraenv
ORACLE_SID = [cdb1] ? cdb3

The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0.1/dbhome_1 is /u01/app/oracle
[oracle@edmbr52p5 ~]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.1.0 Production on Fri Aug 21 10:49:21 2015

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> alter session set container=pdb_gavin;

Session altered.

SQL> @?/rdbms/admin/preupgrd.sql
Loading Pre-Upgrade Package...
Executing Pre-Upgrade Checks...
Pre-Upgrade Checks Complete.
      ************************************************************

Results of the checks are located at:
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/preupgrade.log

Pre-Upgrade Fixup Script (run in source database environment):
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/preupgrade_fixups.sql

Post-Upgrade Fixup Script (run shortly after upgrade):
 /u01/app/oracle/cfgtoollogs/cdb3/preupgrade/postupgrade_fixups.sql

      ************************************************************

         Fixup scripts must be reviewed prior to being executed.

      ************************************************************

      ************************************************************
                   ====>> USER ACTION REQUIRED  <<====
      ************************************************************

 The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.


 1) Check Tag:    OLS_SYS_MOVE
    Check Summary: Check if SYSTEM.AUD$ needs to move to SYS.AUD$ before upgrade
    Fixup Summary:
     "Execute olspreupgrade.sql script prior to upgrade."
    +++ Source Database Manual Action Required +++

            You MUST resolve the above error prior to upgrade

      ************************************************************

The execution of the preupgrd.sql script will generate 3 separate files.

1)preupgrade.log
2)preupgrade_fixups.sql
3)postupgrade_fixups.sql

Let us examine the contents of the preupgrade.log file.

Oracle Database Pre-Upgrade Information Tool 08-21-2015 10:50:04
Script Version: 12.1.0.1.0 Build: 006
**********************************************************************
   Database Name:  CDB3
         Version:  12.1.0.1.0
      Compatible:  12.1.0.0.0
       Blocksize:  8192
        Platform:  Linux x86 64-bit
   Timezone file:  V18
**********************************************************************
                          [Renamed Parameters]
                     [No Renamed Parameters in use]
**********************************************************************
**********************************************************************
                    [Obsolete/Deprecated Parameters]
             [No Obsolete or Desupported Parameters in use]
**********************************************************************
                            [Component List]
**********************************************************************
--> Oracle Catalog Views                   [upgrade]  VALID
--> Oracle Packages and Types              [upgrade]  VALID
--> JServer JAVA Virtual Machine           [upgrade]  VALID
--> Oracle XDK for Java                    [upgrade]  VALID
--> Real Application Clusters              [upgrade]  OPTION OFF
--> Oracle Workspace Manager               [upgrade]  VALID
--> OLAP Analytic Workspace                [upgrade]  VALID
--> Oracle Label Security                  [upgrade]  VALID
--> Oracle Database Vault                  [upgrade]  VALID
--> Oracle Text                            [upgrade]  VALID
--> Oracle XML Database                    [upgrade]  VALID
--> Oracle Java Packages                   [upgrade]  VALID
--> Oracle Multimedia                      [upgrade]  VALID
--> Oracle Spatial                         [upgrade]  VALID
--> Oracle Application Express             [upgrade]  VALID
--> Oracle OLAP API                        [upgrade]  VALID
**********************************************************************
           [ Unsupported Upgrade: Tablespace Data Supressed ]
**********************************************************************
**********************************************************************
                          [Pre-Upgrade Checks]
**********************************************************************
ERROR: --> SYSTEM.AUD$ (audit records) Move

    An error occured retrieving a count from SYSTEM.AUD$
    This can happen when the table has already been cleaned up.
    The olspreupgrade.sql script should be re-executed.



WARNING: --> Existing DBMS_LDAP dependent objects

     Database contains schemas with objects dependent on DBMS_LDAP package.
     Refer to the Upgrade Guide for instructions to configure Network ACLs.
     USER APEX_040200 has dependent objects.


**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                     [Post-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ******** Fixed Object Statistics ********
                        *****************************************

Please create stats on fixed objects two weeks
after the upgrade using the command:
   EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                   ************  Summary  ************

 1 ERROR exist that must be addressed prior to performing your upgrade.
 2 WARNINGS that Oracle suggests are addressed to improve database performance.
 0 INFORMATIONAL messages messages have been reported.

 After your database is upgraded and open in normal mode you must run
 rdbms/admin/catuppst.sql which executes several required tasks and completes
 the upgrade process.

 You should follow that with the execution of rdbms/admin/utlrp.sql, and a
 comparison of invalid objects before and after the upgrade using
 rdbms/admin/utluiobj.sql

 If needed you may want to upgrade your timezone data using the process
 described in My Oracle Support note 977512.1
                   ***********************************

So as part of the pre-upgrade preparation we execute :

SQL> @?/rdbms/admin/olspreupgrade.sql

and 

SQL>  EXECUTE dbms_stats.gather_dictionary_stats;

Unplug the PDB from the 12.1.0.1 Container Database

SQL>  alter session set container=CDB$ROOT;

Session altered.

SQL> alter pluggable database  pdb_gavin unplug into '/home/oracle/pdb_gavin.xml';

Pluggable database altered

Create the PDB in the 12.1.0.2 Container Database

[oracle@edmbr52p5 ~]$ . oraenv
ORACLE_SID = [cdb2] ? cdb1
The Oracle base for ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1 is /u01/app/oracle

[oracle@edmb]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Fri Aug 21 12:04:10 2015

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options


SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT


SQL> create pluggable database pdb_gavin
  2   using '/home/oracle/pdb_gavin.xml'
  3  nocopy
  4  tempfile reuse;

Pluggable database created..

Upgrade the PDB to 12.1.0.2

After the pluggable database has been created in the 12.1.0.2 container, we will open it with the UPGRADE option in order to run the catupgrd.sql database upgrade script.

We can see that we receive some errors and warnings which we can safely ignore, as we are in the middle of upgrading the PDB.

SQL> alter pluggable database pdb_gavin open upgrade;

Warning: PDB altered with errors.


SQL> select message, status from pdb_plug_in_violations where type like '%ERR%';

MESSAGE
--------------------------------------------------------------------------------
STATUS
---------
Character set mismatch: PDB character set US7ASCII. CDB character set AL32UTF8.
RESOLVED

PDB's version does not match CDB's version: PDB's version 12.1.0.1.0. CDB's vers
ion 12.1.0.2.0.
PENDING

We now run the catctl.pl Perl script and specify the PDB name (if we were upgrading multiple PDBs here we would separate each PDB name with a comma) – note that we are also running the upgrade in parallel.

[oracle@edm ~]$ cd $ORACLE_HOME/rdbms/admin
[oracle@edm admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -c "PDB_GAVIN" -n 4 -l /tmp catupgrd.sql

Argument list for [catctl.pl]
SQL Process Count     n = 4
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = /tmp
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = 0
Run in                c = PDB_GAVIN
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 0

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrd_catcon_19456.lst
catcon: See /tmp/catupgrd*.log files for output generated by scripts
catcon: See /tmp/catupgrd_*.lst files for spool files, if any
Number of Cpus        = 8
Parallel PDB Upgrades = 2
SQL PDB Process Count = 2
SQL Process Count     = 4

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1_1
PDB_GAVIN
PDB Inclusion:[PDB_GAVIN] Exclusion:[]

Starting
[/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl catctl.pl -c 'PDB_GAVIN' -n 2 -l /tmp -I -i pdb_gavin catupgrd.sql]

Argument list for [catctl.pl]
SQL Process Count     n = 2
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = /tmp
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = pdb_gavin
Run in                c = PDB_GAVIN
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 1

catctl.pl version: 12.1.0.2.0
Oracle Base           = /u01/app/oracle

Analyzing file catupgrd.sql
Log files in /tmp
catcon: ALL catcon-related output will be written to /tmp/catupgrdpdb_gavin_catcon_19562.lst
catcon: See /tmp/catupgrdpdb_gavin*.log files for output generated by scripts
catcon: See /tmp/catupgrdpdb_gavin_*.lst files for spool files, if any
Number of Cpus        = 8
SQL PDB Process Count = 2
SQL Process Count     = 2

[CONTAINER NAMES]

CDB$ROOT
PDB$SEED
PDB1_1
PDB_GAVIN
PDB Inclusion:[PDB_GAVIN] Exclusion:[]

------------------------------------------------------
Phases [0-73]
Container Lists Inclusion:[PDB_GAVIN] Exclusion:[]
Serial   Phase #: 0 Files: 1     Time: 15s   PDB_GAVIN
Serial   Phase #: 1 Files: 5     Time: 107s  PDB_GAVIN
Restart  Phase #: 2 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #: 3 Files: 18    Time: 40s   PDB_GAVIN
Restart  Phase #: 4 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #: 5 Files: 5     Time: 43s   PDB_GAVIN
Serial   Phase #: 6 Files: 1     Time: 18s   PDB_GAVIN
Serial   Phase #: 7 Files: 4     Time: 11s   PDB_GAVIN
Restart  Phase #: 8 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #: 9 Files: 62    Time: 110s  PDB_GAVIN
Restart  Phase #:10 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:11 Files: 1     Time: 28s   PDB_GAVIN
Restart  Phase #:12 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:13 Files: 91    Time: 8s    PDB_GAVIN
Restart  Phase #:14 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:15 Files: 111   Time: 15s   PDB_GAVIN
Restart  Phase #:16 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:17 Files: 3     Time: 2s    PDB_GAVIN
Restart  Phase #:18 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:19 Files: 32    Time: 43s   PDB_GAVIN
Restart  Phase #:20 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:21 Files: 3     Time: 11s   PDB_GAVIN
Restart  Phase #:22 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:23 Files: 23    Time: 75s   PDB_GAVIN
Restart  Phase #:24 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:25 Files: 11    Time: 25s   PDB_GAVIN
Restart  Phase #:26 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:27 Files: 1     Time: 1s    PDB_GAVIN
Restart  Phase #:28 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:30 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:31 Files: 257   Time: 29s   PDB_GAVIN
Serial   Phase #:32 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:33 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:34 Files: 1     Time: 3s    PDB_GAVIN
Restart  Phase #:35 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:36 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:37 Files: 4     Time: 62s   PDB_GAVIN
Restart  Phase #:38 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:39 Files: 13    Time: 33s   PDB_GAVIN
Restart  Phase #:40 Files: 1     Time: 0s    PDB_GAVIN
Parallel Phase #:41 Files: 10    Time: 5s    PDB_GAVIN
Restart  Phase #:42 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:43 Files: 1     Time: 7s    PDB_GAVIN
Restart  Phase #:44 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:45 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:46 Files: 1     Time: 0s    PDB_GAVIN
Restart  Phase #:47 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:48 Files: 1     Time: 71s   PDB_GAVIN
Restart  Phase #:49 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:50 Files: 1     Time: 9s    PDB_GAVIN
Restart  Phase #:51 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:52 Files: 1     Time: 41s   PDB_GAVIN
Restart  Phase #:53 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:54 Files: 1     Time: 51s   PDB_GAVIN
Restart  Phase #:55 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:56 Files: 1     Time: 36s   PDB_GAVIN
Restart  Phase #:57 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:58 Files: 1     Time: 37s   PDB_GAVIN
Restart  Phase #:59 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:60 Files: 1     Time: 48s   PDB_GAVIN
Restart  Phase #:61 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:62 Files: 1     Time: 112s  PDB_GAVIN
Restart  Phase #:63 Files: 1     Time: 0s    PDB_GAVIN
Serial   Phase #:64 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only -pdbs PDB_GAVIN > /tmp/catupgrdpdb_gavin_datapatch_upgrade.log 2> /tmp/catupgrdpdb_gavin_datapatch_upgrade.err
returned from sqlpatch
    Time: 3s    PDB_GAVIN
Serial   Phase #:66 Files: 1     Time: 1s    PDB_GAVIN
Serial   Phase #:68 Files: 1     Time: 12s   PDB_GAVIN
Serial   Phase #:69 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/u01/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/u01/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /u01/app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -pdbs PDB_GAVIN > /tmp/catupgrdpdb_gavin_datapatch_normal.log 2> /tmp/catupgrdpdb_gavin_datapatch_normal.err
returned from sqlpatch
    Time: 3s    PDB_GAVIN
Serial   Phase #:70 Files: 1     Time: 30s   PDB_GAVIN
Serial   Phase #:71 Files: 1     Time: 4s    PDB_GAVIN
Serial   Phase #:72 Files: 1     Time: 3s    PDB_GAVIN
Serial   Phase #:73 Files: 1     Time: 0s    PDB_GAVIN

Grand Total Time: 1155s PDB_GAVIN

LOG FILES: (catupgrdpdb_gavin*.log)

Upgrade Summary Report Located in:
/u01/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/cdb1/upgrade/upg_summary.log

Total Upgrade Time:          [0d:0h:19m:15s]

     Time: 1156s For PDB(s)

Grand Total Time: 1156s

LOG FILES: (catupgrd*.log)

Grand Total Upgrade Time:    [0d:0h:19m:16s]
[oracle@edmbr52p5 admin]$


Run the post upgrade steps

We then start the PDB and run the post-upgrade steps, which include recompiling all the invalid objects and gathering fresh statistics on the fixed dictionary objects.

That completes the PDB upgrade – not quite a simple plug and unplug!!

SQL> startup;
Pluggable Database opened.


SQL> @?/rdbms/admin/utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2015-08-21 12:35:42

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2015-08-21 12:36:02

DOC> The following query reports the number of objects that have compiled
DOC> with errors.
DOC>
DOC> If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


Function created.


PL/SQL procedure successfully completed.


Function dropped.

...Database user "SYS", database schema "APEX_040200", user# "98" 12:36:13
...Compiled 0 out of 3014 objects considered, 0 failed compilation 12:36:13
...271 packages
...263 package bodies
...452 tables
...11 functions
...16 procedures
...3 sequences
...457 triggers
...1320 indexes
...211 views
...0 libraries
...6 types
...0 type bodies
...0 operators
...0 index types
...Begin key object existence check 12:36:13
...Completed key object existence check 12:36:13
...Setting DBMS Registry 12:36:13
...Setting DBMS Registry Complete 12:36:13
...Exiting validate 12:36:13

PL/SQL procedure successfully completed.

SQL>

SQL> EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

PL/SQL procedure successfully completed.



SQL> SELECT NAME,OPEN_MODE FROM V$PDBS;

NAME                           OPEN_MODE
------------------------------ ----------
PDB$SEED                       READ ONLY
PDB_GAVIN                      READ WRITE


Wrong Results On Query With Subquery Using OR EXISTS After upgrade to 12.1.0.2


Recently one of my clients encountered an issue with a SQL query which returned no rows in a database that had been upgraded to 12c, but returned rows in any of the 11g databases which had not yet been upgraded.

The query was


SELECT *
  FROM STORAGE t0
  WHERE ( ( ( ( ( ( (ROWNUM <= 30) AND (t0.BUSINESS_UNIT_ID = 2))   AND (t0.PLCODE = 1001))
                  AND (t0.SM_SERIALNUM = '5500100000149000994'))
                  AND ( (t0.SM_MODDATE IS NULL) OR (t0.SM_MODDATE <= SYSDATE)))
                AND   ( 
                        (t0.DEALER_ID IS NULL)
                         OR 
                        EXISTS   (SELECT t1.CUSTOMER_ID  FROM CUSTOMER_ALL t1 WHERE ( (t1.CUSTOMER_ID = t0.DEALER_ID) AND (t1.CSTYPE <> 'd')))
                        )
        )
        AND (t0.SM_STATUS <> 'b'));

If we added the hint /*+ optimizer_features_enable('11.2.0.4') */ to the query, it worked fine.
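For illustration, the hint simply goes into the outer query block (the query is shortened here):

SELECT /*+ optimizer_features_enable('11.2.0.4') */ *
  FROM STORAGE t0
 WHERE ...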

After a bit of investigation we found that we were possibly hitting this bug:

Bug 18650065 : WRONG RESULTS ON QUERY WITH SUBQUERY USING OR EXISTS

The solution was either to set the hidden parameter shown below at the session or system level, or to apply patch 18650065, which is now available for download from MOS.

ALTER SESSION SET "_optimizer_null_accepting_semijoin"=FALSE;
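To set it database-wide instead, the equivalent would be along these lines (whether to persist it in the SPFILE is an environment-specific decision):

ALTER SYSTEM SET "_optimizer_null_accepting_semijoin"=FALSE SCOPE=BOTH;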

Patch 18650065 can be applied online in both non-RAC and RAC environments.

For Non-RAC Environments 

$ opatch apply online -connectString orcl:SYS:SYS_PASSWORD

For RAC Environments

2 node RAC example:

$ opatch apply online -connectString orcl1:SYS:SYS_PASSWORD:node1, orcl2:SYS:SYS_PASSWORD:node2



Oracle 12c RMAN DUPLICATE Database


In earlier versions the RMAN DUPLICATE database command was a push-based method. One of the new features in Oracle 12c is that it has been changed to a pull-based method which has many advantages. Let us note the difference between the two methods.

In the earlier push-based method, the source database transfers the required database files to the auxiliary database as image copies. Now let us say we had a tablespace which had a 10GB data file, but the tablespace only contained say about 1 GB of data. Regardless, since it is an image copy, the entire 10 GB data file had to be copied over the network.

Now in Oracle 12c RMAN performs active database duplication using backup sets and not image copies. Taking the earlier example of a tablespace having a 10GB data file but say having only 1 GB of occupied data, only the 1 GB is now copied over the network as a backup set and not the entire 10 GB data file.

With backupsets there are a number of advantages.

So this is what is new in the Oracle 12c DUPLICATE ... FROM ACTIVE DATABASE command, and these new features certainly provide advantages over the earlier pre-12c method.

  • RMAN can employ unused block compression while creating backups, thus reducing the size of backups that are transported over the network (USING BACKUPSET and USING COMPRESSED BACKUPSET clauses).
  • Using multi-section backups, backup sets can be created in parallel on the source database (SECTION SIZE clause).
  • In addition we can also encrypt backup sets created on the source database via the SET ENCRYPTION command.

Let us look at an example using the pull-based method to create a duplicate database using RMAN backupsets from an active database.

Let us assume the source database name is BSPRD and we are creating a clone of this database.

So what preparation work do we have to do for this RMAN DUPLICATE to work? It is the same as in 11g – this part has not changed.

The first and most important thing to do is the network part of the setup.

Add a static entry in the listener.ora on the target, and add a TNS alias in the tnsnames.ora file on both the source and target database servers.
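As a sketch, that network setup might look something like the following – the host name, port and paths are placeholders; the bsprd_dup alias matches the auxiliary connect string used later in this post, and a similar bsprd alias pointing at the source host is also needed:

# listener.ora on the target host - static registration for the auxiliary instance
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = bsprd)
      (ORACLE_HOME = /u01/app/oracle/product/12.1.0/dbhome_1)
    )
  )

# tnsnames.ora on both source and target hosts
bsprd_dup =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = target-host)(PORT = 1521))
    (CONNECT_DATA = (SID = bsprd))
  )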

Then copy the password file from the source to the target and rename the file on the target if the ORACLE_SID on the target is different from the source.

Create any required directories on the destination host if the directory paths on the source and target are going to be different – for example, we may need to create the audit file destination directory on the target.

If the ASM disk group names are different then we may have to connect via asmcmd on the target and create any directories we require.

Also don’t forget the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters in the target database parameter file if the directory structure is different on the target as compared to the source.
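For example, the auxiliary (target) parameter file might contain entries along these lines – the source disk group name +DATA here is hypothetical, while +OEM_DATA is the target disk group seen in the restore output further below:

*.db_file_name_convert='+DATA','+OEM_DATA'
*.log_file_name_convert='+DATA','+OEM_DATA'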

When using the SECTION SIZE parameter take into account the sizes of the data files and the parallelism we are going to use.

In the example I have shown the RMAN parallelism has been set to 4 and two of the bigger data files are 2.2 GB and 1.5 GB – so I have used a section size of 500 MB.

Note – also, when we create the duplicate database via RMAN, we cannot just issue the "TARGET /" command in RMAN.

We have to explicitly provide the user, password as well as the TNS alias for both the target database as well as the auxiliary database.

Like for example:

rman target sys/sys_passwd@bsprd auxiliary sys/sys_passwd@bsprd_dup

Note the RMAN DUPLICATE DATABASE command – it includes the USING BACKUPSET and SECTION SIZE clauses.

Recovery Manager: Release 12.1.0.2.0 - Production on Thu Aug 27 05:27:22 2015
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.

connected to target database: BSPRD (DBID=3581332368)
connected to auxiliary database: BSPRD (not mounted)

RMAN> duplicate target database to bsprd from active database
2> using backupset
3> section size 500m;

Note the 4 auxiliary channels being created because we have configured RMAN with a parallelism of 4.

Starting Duplicate Db at 27-AUG-15
using target database control file instead of recovery catalog
allocated channel: ORA_AUX_DISK_1
channel ORA_AUX_DISK_1: SID=19714 device type=DISK
allocated channel: ORA_AUX_DISK_2
channel ORA_AUX_DISK_2: SID=19713 device type=DISK
allocated channel: ORA_AUX_DISK_3
channel ORA_AUX_DISK_3: SID=6 device type=DISK
allocated channel: ORA_AUX_DISK_4
channel ORA_AUX_DISK_4: SID=2820 device type=DISK
current log archived

The SYSTEM tablespace data file was about 2.2 GB in my case. So we can see that RMAN has split this 2.2 GB based on the section size we allocated which was 500 MB. We have 4 auxiliary channels working on ‘sections’ of the single data file in parallel.

channel ORA_AUX_DISK_1: starting datafile backup set restore
channel ORA_AUX_DISK_1: using network backup set from service bsprd
channel ORA_AUX_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_1: restoring datafile 00001 to +OEM_DATA/BSPRD/DATAFILE/system.302.888816535
channel ORA_AUX_DISK_1: restoring section 1 of 5

channel ORA_AUX_DISK_3: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_3: restoring datafile 00001 to +OEM_DATA/BSPRD/DATAFILE/system.302.888816535
channel ORA_AUX_DISK_3: restoring section 2 of 5
channel ORA_AUX_DISK_4: restore complete, elapsed time: 00:00:04


....

....

channel ORA_AUX_DISK_4: using network backup set from service bsprd
channel ORA_AUX_DISK_4: specifying datafile(s) to restore from backup set
channel ORA_AUX_DISK_4: restoring datafile 00001 to +OEM_DATA/BSPRD/DATAFILE/system.302.888816535
channel ORA_AUX_DISK_4: restoring section 5 of 5
channel ORA_AUX_DISK_1: restore complete, elapsed time: 00:00:10


GoldenGate 12c (12.2) New Features


At the recent Oracle Open World 2015 conference I was fortunate to attend a series of very informative presentations on Oracle GoldenGate from senior members of the Product Development team.

Among them was the presentation titled GoldenGate 12.2 New Features Deep Dive which is now available for download via the official OOW15 website.

While no official release date was announced for Goldengate 12.2, the message was being communicated that the release was going to happen ‘very soon’.

So while we eagerly wait for the official product release, here are some of the new 12.2 features which we can look forward to.

 

No more usage of SOURCEDEFS and ASSUMETARGETDEFS parameters – Metadata included as part of the Trail File

In earlier versions if the structure of the table between the source and target database was different in terms of column names, data types and even column positions (among other things), we had to create a flat file which contained the table definitions and column mapping via the DEFGEN utility. Then we had to transfer this file to the target system.

If we used the parameter ASSUMETARGETDEFS, the assumption was that the internal structure of the target tables was the same as the source – which was not always the case – and we encountered issues.

Now in 12.2, GoldenGate trail files are self-describing. Metadata called a Table Definition Record (TDR) is included in the trail file before the first occurrence of DML on a particular table, and this TDR contains the table and column definitions such as the column number, data type, column length etc.

For new installations which will use the GoldenGate 12.2 software, metadata gets automatically populated in trail files by default. For existing installations we can use the parameter FORMAT RELEASE 12.2 and then any SOURCEDEFS or ASSUMETARGETDEFS parameters are no longer required or are ignored.
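As a sketch, the trail format is controlled on the trail definition in the Extract or Pump parameter file – the trail path below is the one used in the example later in this post:

RMTTRAIL ./dirdat/bsstg/rt, FORMAT RELEASE 12.2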

 

Automatic Heartbeat Table

In earlier versions, one of the recommendations to monitor lag was to create a heartbeat table.

Now in 12.2, there is a built-in mechanism to monitor replication lag. There is a new GGSCI command called ADD HEARTBEATTABLE.

This ADD HEARTBEATTABLE will automatically create the heartbeat tables and views as well as database jobs which update the heartbeat tables every 60 seconds.
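A minimal sketch from GGSCI (the credential alias is the one used elsewhere in this post):

GGSCI> DBLOGIN USERIDALIAS oggsuser_bsstg
GGSCI> ADD HEARTBEATTABLE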

One of the views created is called GG_LAG and it contains columns like INCOMING_LAG which will show the period of time between a remote database generating heartbeat and a local database receiving heartbeat.

Similarly to support an Active-Active Bi-Directional GoldenGate configuration, there is also a column called OUTGOING_LAG which is the period of time between local database generating heartbeat and remote database receiving heartbeat.

The GG_HEARTBEAT table is one of the main tables on which other heartbeat views are built and it will contain lag information for each component – Extract, Pump as well as Replicat. So we can quite easily identify where the bottleneck is when faced with diagnosing a GoldenGate performance issue.

Historical heartbeat and lag information is also maintained in the GG_LAG_HISTORY and GG_HEARTBEAT_HISTORY tables.

 

Parameter Files – checkprm, INFO PARAM, GETPARAMINFO

A new utility is available in 12.2 called checkprm which can be used to validate parameter files before they are deployed.

The INFO PARAM command will give us a lot of information about a particular parameter – like what is the default value and what are valid range of values. It is like accessing the online documentation from the GGSCI command line.

When a process like Replicat or Extract is running, we can use the SEND [process] GETPARAMINFO command to identify the runtime parameters – not only the parameters included in the process parameter file, but also any other parameters the process has accessed that are not in the parameter file. Sometimes we are not aware of the many default parameters a process uses, and this command shows that information in real time while the Extract, Replicat or Manager is up and running.
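A few illustrative invocations – the extract name ETEST and its parameter file under dirprm are assumed from the example later in this post, and the exact output varies by release:

$ checkprm ./dirprm/etest.prm

GGSCI> INFO PARAM CHECKPOINTSECS
GGSCI> SEND EXTRACT ETEST GETPARAMINFO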

 

Transparent Integration with Oracle Clusterware

In earlier releases, when we used the Grid Infrastructure Agent (XAG) to provide high availability capability for Oracle GoldenGate, we had to use the AGCTL to manage the GoldenGate instance like stop and start. If we used the GGSCI commands to start or stop the manager it could cause issues and the recommendation was to only use AGCTL and not GGSCI in that case.

Now in 12.2, once the GoldenGate instance has been registered with Oracle Clusterware using AGCTL, we can then continue to use GGSCI to start and stop GoldenGate without concern of any issues arising because AGCTL was not used. A new parameter for the GLOBALS file is now available called XAG_ENABLE.

 

Integration of GoldenGate with Datapump

In earlier releases when we added new tables to an existing GoldenGate configuration, we had to obtain the CURRENT_SCN from v$DATABASE view, pass that SCN value to the FLASHBACK_SCN parameter of expdp and then when we started the Replicat we had to use the AFTERCSN parameter with the same value.

Now in 12.2, the ADD TRANDATA or ADD SCHEMATRANDATA command will prepare the tables automatically. Oracle Datapump export (expdp) will automatically generate import actions to set the instantiation CSN when that table is imported. We just have to include the new Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING, which will then filter out any DML or DDL records based on the instantiation CSN of that table.
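On the Replicat side the relevant parameter entry would look something like this – the replicat name, credential alias and schema mapping are hypothetical:

REPLICAT rtest
USERIDALIAS oggsuser_target
DBOPTIONS ENABLE_INSTANTIATION_FILTERING
MAP scott.*, TARGET scott.*;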

 

Improved Trail File Recovery

In earlier releases if a trail file was missing or corrupt, the Replicat used to abend.

Now in 12.2, if a trail file is corrupted we can simply delete it and have it rebuilt by restarting the Extract Pump – the same applies to a missing trail file, which can also be rebuilt by bouncing the Extract Pump process. By default, Replicat will automatically filter out duplicate transactions that have already been applied from the regenerated trail files.

 

Support for INVISIBLE Columns

The new MAPINVISIBLECOLUMNS parameter in 12.2 enables replication support for tables (Oracle database only) which contain INVISIBLE columns.

 

Extended Metrics and Fine-grained Performance Monitoring

Release 12.2 now provides real-time process and thread level metrics for Extract, Pump and Replicat which can be accessed through RESTful web services. Real-time database statistics for Extract and Replicat, queues, as well as network statistics for the Extract Pump can be accessed using a URL like:

http://<hostname>:<manager port>/mpointsx

The ENABLEMONITORING parameter needs to be included in the GLOBALS file.

The Java application is also available for free download (and can also be modified and customised) via the URL:

https://java.net/projects/oracledi/downloads/download/GoldenGate/OGGPTRK.jar

 

GoldenGate Studio

New in Release 12.2 is GoldenGate Studio – a GUI tool which enables us to quickly design and deploy GoldenGate solutions. It separates the logical design from the physical design and enables us to create a logical design with one-click, drag-and-drop operations based on business needs, without needing to know all the implementation details.

It has a concept of Projects and Solutions, where one Project can contain a number of Solutions and a Solution contains one logical design and possibly many physical deployments. Rapid design is enabled with a number of out-of-the-box Solution templates like Cascading, Bi-Directional, Unidirectional, Consolidation etc.

GoldenGate Studio enables us to design once and deploy to many environments like Dev, Test, QA and Production with one-click deployment.

 

GoldenGate Cloud Service

GoldenGate Cloud Service is the public cloud-based offering on a Subscription or Hourly basis.

The GoldenGate Cloud Service provides the delivery mechanisms to move Oracle as well as non-Oracle databases from On Premise to DBaaS – Oracle Database Cloud Service as well as Exadata Cloud Service delivery via GoldenGate. GoldenGate Cloud Service also provides Big Data Cloud Service delivery to Hadoop and NoSQL.

 

Nine Digit Trail File Sequence Length

In 12.2, the default is to create trail files with 9-digit sequence numbers instead of the earlier 6-digit sequence. This will now allow 1000 times more files per trail – basically 1 billion files per trail!

We can upgrade existing trail files from 6 to 9 digit sequence numbers using a utility called convchk, and there is also backward compatibility support for existing 6-digit sequences using a GLOBALS parameter called TRAIL_SEQLEN_6D.

GoldenGate 12.2 New Feature – Self-describing Trail Files


One of the top new features introduced in Oracle GoldenGate 12.2 is the Self-describing trail files feature.

This means we no longer have to worry about differences in table structures between the source and target databases, and we no longer need the defgen utility or the ASSUMETARGETDEFS and SOURCEDEFS parameters that were required in earlier releases.

So many of the manual steps have been eliminated.

Now GoldenGate 12.2 supports replication even if source and target have different structures or different databases for that matter.

Metadata information is now contained in the trail files!

We will have a look at this in more detail in the example below, but the trail files now contain two important pieces of information – the Database Definition Record (DDR) and the Table Definition Record (TDR).

Each trail file contains a Database Definition Record (DDR) before the first occurrence of a DML record or a SEQUENCE from a particular database. The DDR contains database-specific information such as the character set, database name and type of database.

Each trail file also contains a Table Definition Record (TDR) before the first occurrence of a DML record for a particular table. The TDR section holds the table and column definitions and metadata, including column number, data types, column lengths and so on.

Example

Let us now create a test table on both the source as well as target database with different column names.

 

Source


SQL> create table system.test_ogg
  2  (emp_id number, first_name varchar2(20), last_name varchar2(20));

Table created.

SQL> alter table system.test_ogg
  2  add constraint pk_test_ogg primary key (emp_id);

Table altered.

 

Target

 

SQL> create table system.test_ogg
2 (emp_id number,f_name varchar(20),l_name varchar2(20));

Table created.

SQL> alter table system.test_ogg
2 add constraint pk_test_ogg primary key (emp_id);

Table altered.

 

Create the Extract and Pump processes on the source
 
Source

 

host1>./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 12.2.0.1.0 OGGCORE_12.2.0.1.0_PLATFORMS_151101.1925.2_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Nov 11 2015 03:53:23
Operating system character set identified as UTF-8.



GGSCI (host1 as oggsuser@DB01) 5> add extract etest integrated tranlog begin now                                                                                   
EXTRACT (Integrated) added.


GGSCI (host1 as oggsuser@DB01) 6> add exttrail ./dirdat/auxdit/lt extract etest
EXTTRAIL added.


GGSCI (host1 as oggsuser@DB01) 9> add extract ptest exttrailsource ./dirdat/auxdit/lt
EXTRACT added.

GGSCI (host1 as oggsuser@DB01) 11> add rmttrail ./dirdat/bsstg/rt extract ptest
RMTTRAIL added.


GGSCI (host1 as oggsuser@DB01) 10> register extract etest database

2015-12-21 05:09:33  INFO    OGG-02003  Extract ETEST successfully registered with database at SCN 391450385.

 

Extract and Pump Parameter files


extract etest

USERIDALIAS oggsuser_bsstg

LOGALLSUPCOLS
UPDATERECORDFORMAT COMPACT


TRANLOGOPTIONS EXCLUDEUSER OGGSUSER
TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 2048, parallelism 2)

EXTTRAIL ./dirdat/auxdit/lt

WARNLONGTRANS 2h, CHECKINTERVAL 30m
REPORTCOUNT EVERY 15 MINUTES, RATE
STATOPTIONS  RESETREPORTSTATS
REPORT AT 23:59
REPORTROLLOVER AT 00:01 ON MONDAY
GETUPDATEBEFORES

TABLE SYSTEM.TEST_OGG;



EXTRACT ptest

USERIDALIAS oggsuser_bsstg

RMTHOST host2,  MGRPORT 7809 TCPBUFSIZE 200000000, TCPFLUSHBYTES 200000000, compress

RMTTRAIL ./dirdat/bsstg/rt

PASSTHRU

REPORTCOUNT EVERY 15 MINUTES, RATE

TABLE SYSTEM.TEST_OGG;

On the target create and start the replicat process

 
Target

 

GGSCI (host2) 2> add replicat rtest integrated exttrail ./dirdat/bsstg/rt
REPLICAT (Integrated) added.

 

Replicat parameter file – note NO parameter ASSUMETARGETDEFS


REPLICAT rtest

SETENV (ORACLE_HOME="/orasw/app/oracle/product/12.1.0/db_1")
SETENV (TNS_ADMIN="/orasw/app/oracle/product/12.1.0/db_1/network/admin")
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")

USERIDALIAS oggsuser_auxdit


MAP SYSTEM.TEST_OGG, TARGET SYSTEM.TEST_OGG;

Start the Extract, Pump and Replicat processes
 

Source

 

GGSCI (host1 as oggsuser@DB01) 15> start manager
Manager started.


GGSCI (host1 as oggsuser@DB01) 16> start etest
EXTRACT ETEST starting


GGSCI (host1 as oggsuser@DB01) 17> start ptest

Sending START request to MANAGER ...
EXTRACT PTEST starting

 

Target

 

GGSCI (host2) 3> start rtest

Sending START request to MANAGER ...
REPLICAT RTEST starting


GGSCI (host2) 4> info rtest

REPLICAT   RTEST     Last Started 2015-12-21 05:21   Status RUNNING
INTEGRATED
Checkpoint Lag       00:00:00 (updated 00:08:53 ago)
Process ID           29864
Log Read Checkpoint  File ./dirdat/bsstg/rt000000000
                     First Record  RBA 0


 

On the source database insert a row into the TEST_OGG table

 

Source

 

SQL> insert into system.test_ogg
  2   values
  3   (007, 'JAMES','BOND');

1 row created.

SQL> commit;

Commit complete.

 

On the target we can see that the change has been replicated

 

Target

 

GGSCI (host2) 5> stats rtest latest

Sending STATS request to REPLICAT RTEST ...

Start of Statistics at 2015-12-21 05:26:32.


Integrated Replicat Statistics:

        Total transactions                                 1.00
        Redirected                                         0.00
        DDL operations                                     0.00
        Stored procedures                                  0.00
        Datatype functionality                             0.00
        Event actions                                      0.00
        Direct transactions ratio                          0.00%

Replicating from SYSTEM.TEST_OGG to SYSTEM.TEST_OGG:

*** Latest statistics since 2015-12-21 05:25:33 ***
        Total inserts                                      1.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

End of Statistics.



 

From the Replicat report file we can see that the definition for the TEST_OGG table was obtained from the GoldenGate trail file.

 

2015-12-21 05:25:22  INFO    OGG-06505  MAP resolved (entry SYSTEM.TEST_OGG): MAP "SYSTEM"."TEST_OGG", TARGET SYSTEM.TEST_OGG.

2015-12-21 05:25:33  INFO    OGG-02756  The definition for table SYSTEM.TEST_OGG is obtained from the trail file.

By using the logdump utility we can view the Database Definition Record (DDR) as well as Table Definition Record (TDR) information contained in the trail file.
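A sketch of the logdump session used to produce the output below (the trail file name is assumed from the Replicat checkpoint shown earlier):

Logdump 1 >open ./dirdat/bsstg/rt000000000
Logdump 2 >ghdr on
Logdump 3 >detail on
Logdump 4 >detail data
Logdump 5 >n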

DDR Version: 1
Database type: ORACLE
Character set ID: we8iso8859p1
National character set ID: UTF-16
Locale: neutral
Case sensitivity: 14 14 14 14 14 14 14 14 14 14 14 14 11 14 14 14
TimeZone: GMT-07:00
Global name: BSSTG

2015/12/21 05:25:18.534.893 Metadata             Len 277 RBA 1541
Name: SYSTEM.TEST_OGG
*
 1)Name          2)Data Type        3)External Length  4)Fetch Offset      5)Scale         6)Level
 7)Null          8)Bump if Odd      9)Internal Length 10)Binary Length    11)Table Length 12)Most Sig DT
13)Least Sig DT 14)High Precision  15)Low Precision   16)Elementary Item  17)Occurs       18)Key Column
19)Sub DataType 20)Native DataType 21)Character Set   22)Character Length 23)LOB Type     24)Partial Type
*
TDR version: 1
Definition for table SYSTEM.TEST_OGG
Record Length: 108
Columns: 3

EMP_ID       64     50        0  0  0 1 0     50     50     50 0 0 0 0 1    0 1   2    2       -1      0 0 0
FIRST_NAME   64     20       56  0  0 1 0     20     20      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
LAST_NAME    64     20       82  0  0 1 0     20     20      0 0 0 0 0 1    0 0   0    1       -1      0 0 0
End of definition


GoldenGate 12.2 supports INVISIBLE columns


Oracle GoldenGate 12.2 now provides support for replicating tables with INVISIBLE columns, which was not possible in earlier releases.

Let us look at an example.

We create a table on both the source and target databases with a COMMISSION column that is both INVISIBLE and VIRTUAL.

SQL>  create table system.test_ogg
  2   (empid number, salary number, commission number INVISIBLE generated always as (salary * .05) VIRTUAL );

Table created.

SQL>  alter table system.test_ogg
  2   add constraint pk_test_ogg primary key (empid);

Table altered.


Note that the column is not visible until we use the SET COLINVISIBLE ON command in SQL*PLUS.

SQL> desc  system.test_ogg
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPID                                              NUMBER
 SALARY                                             NUMBER


SQL> SET COLINVISIBLE ON

SQL> desc  system.test_ogg
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EMPID                                              NUMBER
 SALARY                                             NUMBER
 COMMISSION (INVISIBLE)                             NUMBER

We now insert a row into the TEST_OGG table.

The value for the INVISIBLE and VIRTUAL column is derived based on the value of the SALARY column.

Note that the SELECT * command will not display the invisible column COMMISSION.

SQL> insert into system.test_ogg
  2  values
  3   (1001, 10000);

1 row created.

SQL> commit;

Commit complete.



SQL> select empid,salary,commission from system.test_ogg;

     EMPID     SALARY COMMISSION
---------- ---------- ----------
      1001      10000        500


SQL> select * from system.test_ogg;

     EMPID     SALARY
---------- ----------
      1001      10000

On the target GoldenGate environment we can see that the table structure information was derived from the trail files: in 12.2 the table metadata is contained in the self-describing trail files, so the SOURCEDEFS and ASSUMETARGETDEFS parameters are no longer required even when the source and target tables differ in structure.

2015-12-25 07:53:07  INFO    OGG-02756  The definition for table SYSTEM.TEST_OGG is obtained from the trail file.
Skipping invisible column COMMISSION in default map.
2015-12-25 07:53:07  INFO    OGG-06511  Using following columns in default map by name: EMPID, SALARY.

2015-12-25 07:53:07  INFO    OGG-06510  Using the following key columns for target table SYSTEM.TEST_OGG: EMPID.

On the target database we can see that the row has been replicated and the invisible column COMMISSION has been populated as well.

SQL> select empid,salary,commission from system.test_ogg;

     EMPID     SALARY COMMISSION
---------- ---------- ----------
      1001      10000        500

Tuning Integrated Replicat performance using EAGER_SIZE parameter


Is Oracle GoldenGate really designed for batch processing or “large” transactions? I am not sure what the official Oracle take on this is, but I would hazard a guess and say maybe not – that kind of workload is perhaps better suited to an ETL product like Oracle Data Integrator.

GoldenGate considers a transaction to be large if it changes more than 15100 rows (this threshold changed in version 12.2; it was 9500 in earlier versions).

An important parameter, EAGER_SIZE, controls how GoldenGate applies these “large” transactions.

In essence, the question for Oracle GoldenGate is: when it sees a large number of LCRs in a transaction, does it start applying them straight away (which, I guess, is where the “eager” part of the parameter name comes from), or does it wait for the entire transaction to be committed and only then start applying changes?

This “waiting” seems to serialize the apply process and adds to the apply lag on the target in a big way.

We can see from test case 2 below that the apply lag more than doubled.

To illustrate this let us run a series of tests involving replication with source and target Oracle GoldenGate 12.2 environments located over 3000 KM from each other.

The test involves running a procedure which executes a series of INSERT and DELETE statements on a set of 10 tables. The load procedure generates 200 transactions which are executed in a 30 second period on the source database. These 200 transactions change in total over 2 million rows across the 10 tables.

Test 1) Maximum size of transaction is 10,000 rows

Test 2) Maximum size of transaction is 20,000 rows (EAGER_SIZE left at its default value)

Test 3) Maximum size of transaction is 20,000 rows (EAGER_SIZE increased to 25000)

 
Apply Lag on the target database:
 

Test 1) ~ 20 seconds
Test 2) ~ 50 seconds
Test 3) ~ 20 seconds

 

Test 1

Note the maximum number of rows in a single transaction in this case is 10,000.

This is the code we are using in the procedure to generate the load test.

create or replace procedure sysadm.load_gen
IS
BEGIN
FOR i in 1 .. 10
LOOP
delete sysadm.myobjects1;
commit;
delete sysadm.myobjects2;
commit;
…
…

delete sysadm.myobjects10;
commit;
insert into sysadm.myobjects1
select * from all_objects where rownum < 10001;
commit;
insert into sysadm.myobjects2
select * from all_objects where rownum < 10001;
commit;

…..
…..
….

insert into sysadm.myobjects10
select * from all_objects where rownum < 10001;
commit;
end loop;
END;
/

When we kick off the load procedure in each of the 3 test cases on the source, we will see that for about 30 seconds all Apply Servers are idle.

So what is happening in this time?

• On the source database the Log Mining server mines the redo log files and extracts changes in the form of Logical Change Records, which are passed on to the Extract process, which then writes them to the GoldenGate trail files.

• The trail files are sent by the Extract Pump over the network to the target.

• Once the trail files are received on the target server, the Replicat process reads them and constructs Logical Change Records.

• These LCRs are sent to the target database, where the inbound server (the database apply engine) starts the various apply components – the Receiver which receives the LCRs, the Preparer and Coordinator which sort transactions and organize them in terms of primary and foreign key dependencies, and finally the Apply Server processes which apply the changes to the database.

Initially we see that the apply engine has started 8 individual Apply Server processes, because we set the PARALLELISM parameter to 8.
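For reference, a minimal sketch of the two pieces assumed in these tests: the Replicat parameter that sets the apply parallelism, and the query against V$GG_APPLY_SERVER used throughout the output below.

-- Integrated Replicat parameter file (excerpt)
DBOPTIONS INTEGRATEDPARAMS(PARALLELISM 8)

-- run on the target database
SQL> select server_id, state, total_messages_applied from v$gg_apply_server;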

SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         1 IDLE                                      0
         2 IDLE                                      0
         3 IDLE                                      0
         4 IDLE                                      0
         5 IDLE                                      0
         6 IDLE                                      0
         7 IDLE                                      0
         8 IDLE                                      0

Once the apply engine detects additional load coming in, it spawns additional Apply Server processes on the fly. This is a big advantage of Integrated Replicat over Classic or Coordinated Replicat: it is load aware, so we do not have to manually allocate the number of Apply Servers or map an Apply Server to a table or set of target tables.

Note that after a few seconds the Apply Servers start applying the received changes, and a ninth Apply Server process has been added to the earlier 8.

SQL> /

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         1 IDLE                                  50005
         2 IDLE                                  20002
         3 IDLE                                  30003
         4 IDLE                                      0
         5 IDLE                                      0
         6 IDLE                                      0
         7 IDLE                                      0
         8 IDLE                                      0

9 rows selected.

SQL> /

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         1 IDLE                                  50005
         2 IDLE                                  20002
         3 IDLE                                  30003
         4 IDLE                                      0
         5 IDLE                                      0
         6 IDLE                                      0
         7 IDLE                                      0
         8 IDLE                                      0

9 rows selected.


From the view V$GG_APPLY_SERVER we can see the state ‘EXECUTE TRANSACTION’ which shows Apply Servers are applying transactions in parallel.

 
SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         1 IDLE                                 140014
         2 EXECUTE TRANSACTION                  302634
         3 IDLE                                 270027
         4 EXECUTE TRANSACTION                  182775
         5 IDLE                                  60006
         6 EXECUTE TRANSACTION                  130013
         7 IDLE                                      0
         8 IDLE                                      0



SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         9 INACTIVE                                  0
         0 IDLE                                      0
         1 EXECUTE TRANSACTION                  187834
         2 EXECUTE TRANSACTION                  487708
         3 IDLE                                 330033
         4 EXECUTE TRANSACTION                  537853
         5 EXECUTE TRANSACTION                  177838
         6 EXECUTE TRANSACTION                  267948
         7 IDLE                                      0
         8 IDLE                                      0


Finally we see all the servers are idle – TOTAL_MESSAGES_APPLIED is about 2 million, which is roughly equal to the number of rows changed.

Also note that an additional (10th) Apply Server was started while the apply engine was applying changes to the target.

SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
        10 IDLE                                      0
         2 IDLE                                 360036
         3 IDLE                                 180018
         4 IDLE                                 280028
         9 INACTIVE                                  0
         5 IDLE                                 410041
         6 IDLE                                 200022
         1 IDLE                                 340034
         7 IDLE                                 220022
         8 IDLE                                  10001

 


Test 2

Now we run the same load test.

While the number of transactions and number of rows being changed remains the same, we have increased the number of rows in a single transaction to 20,000 (from earlier 10,000).

So we change the procedure code as shown below and reduce the number of iterations in the loop from 10 to 5 to keep the volume of rows changed the same as before.

insert into sysadm.myobjects1
select * from all_objects where rownum < 10001;
commit;

TO

insert into sysadm.myobjects1
select * from all_objects where rownum < 20001;
commit;

Now we can see that at any given time only one Apply Server is in the EXECUTE TRANSACTION state – all the rest are idle, in the WAIT DEPENDENCY state, or sometimes in the WAIT FOR NEXT CHUNK state.

If we query the database performance views, the Top Activity page in OEM, or ASH Analytics, we will see the wait event REPL: Apply Dependency showing up.


 

We can see that it is the Apply Server process of the Integrated Replicat RBSPRD1 that is mainly responsible for that particular wait event.



 

SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         8 IDLE                                      0
         9 IDLE                                      0
        10 IDLE                                      0
         1 WAIT DEPENDENCY                      450026
         2 EXECUTE TRANSACTION                  229333
         3 WAIT DEPENDENCY                      460025
         4 IDLE                                 340017
         5 WAIT DEPENDENCY                      220012
         6 IDLE                                      0
         7 IDLE                                      0

SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         8 IDLE                                      0
         9 IDLE                                      0
        10 IDLE                                      0
         1 EXECUTE TRANSACTION                  455418
         2 WAIT DEPENDENCY                      230014
         3 WAIT DEPENDENCY                      460025
         4 IDLE                                 340017
         5 IDLE                                 240012
         6 IDLE                                      0
         7 IDLE                                      0


SQL>  select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         8 IDLE                                      0
         9 IDLE                                      0
        10 IDLE                                      0
         1 WAIT DEPENDENCY                      470027
         2 WAIT DEPENDENCY                      230014
         3 EXECUTE TRANSACTION                  476575
         4 IDLE                                 340017
         5 WAIT DEPENDENCY                      240013
         6 IDLE                                      0
         7 IDLE                                      0

 
Test 3

We now run the same load procedure, but we add a new parameter, EAGER_SIZE, to the Replicat parameter file.

Since the size of the biggest transaction is now 20,000 rows, we need to set EAGER_SIZE to a value higher than that.

For example:

DBOPTIONS INTEGRATEDPARAMS(PARALLELISM 8, EAGER_SIZE 25000)

Note that increasing EAGER_SIZE places additional memory demands on the Streams pool (STREAMS_POOL_SIZE).
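As an illustrative check only (the 2G value is an example, not a recommendation):

SQL> show parameter streams_pool_size

SQL> alter system set streams_pool_size = 2G scope = both sid = '*';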

Now we see that again we have Apply Servers executing transactions in parallel and there are no servers in the state of WAIT DEPENDENCY.

SQL> select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         3 EXECUTE TRANSACTION                  207829
         9 IDLE                                      0
         8 IDLE                                      0
         4 EXECUTE TRANSACTION                       0
         5 IDLE                                      0
         6 IDLE                                      0
         1 EXECUTE TRANSACTION                  227498
         7 IDLE                                      0
         2 EXECUTE TRANSACTION                  160008


SQL> select server_id,STATE ,TOTAL_MESSAGES_APPLIED from  v$gg_apply_server;

 SERVER_ID STATE                TOTAL_MESSAGES_APPLIED
---------- -------------------- ----------------------
         3 EXECUTE TRANSACTION                  227717
         9 IDLE                                      0
         8 IDLE                                      0
         4 EXECUTE TRANSACTION                   67601
         5 IDLE                                      0
         6 IDLE                                      0
         1 EXECUTE TRANSACTION                  268900
         7 IDLE                                      0
         2 EXECUTE TRANSACTION                  308590


GoldenGate 12.2 New Feature – Check and validate parameter files using checkprm


In GoldenGate 12.2 we can now validate parameter files before deployment.

There is a new utility called checkprm which can be used for this purpose.

To run the checkprm utility we provide the name of the parameter file and can optionally indicate which process the parameter file belongs to using the COMPONENT keyword.

Let us look at an example.

 

ors-db-01@oracle:omprd1>./checkprm ./dirprm/eomprd1.prm --COMPONENT EXTRACT

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable ORACLE_HOME=/orasw/app/oracle/product/12.1.0/db_1.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable ORACLE_SID=omprd2.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable TNS_ADMIN=/orasw/app/oracle/product/12.1.0/db_1/network/admin.

2016-01-21 21:53:13  INFO    OGG-02095  Successfully set environment variable NLS_LANG=AMERICAN_AMERICA.AL32UTF8.

(eomprd1.prm) line 13: Parsing error, [DYNAMICRESOLUTION] is deprecated.

(eomprd1.prm) line 22: Parameter [REPORTDETAIL] is not valid for this configuration.

2016-01-21 21:53:13  INFO    OGG-10139  Parameter file ./dirprm/eomprd1.prm:  Validity check: FAIL.


We can see that this parameter file has failed the validation check because it contained the line below, and REPORTDETAIL is no longer supported in 12.2.

STATOPTIONS REPORTDETAIL, RESETREPORTSTATS

We changed the parameter file to include

STATOPTIONS RESETREPORTSTATS

and now run the checkprm utility again. We can see that the validation of the parameter file now passes.


ors-db-01@oracle:BSSTG1>./checkprm ./dirprm/eomprd1.prm

2015-11-18 19:29:45  INFO    OGG-10139  Parameter file ./dirprm/eomprd1.prm:  Validity check: PASS.

Runtime parameter validation is not reflected in the above check.


GoldenGate 12.2 New Feature – INFO and GETPARAMINFO


New in Oracle GoldenGate 12.2 is the ability to get detailed help about the usage of a particular parameter (INFO), as well as information about the active parameters of a running Extract, Replicat or Manager process (GETPARAMINFO).

 

INFO
 
In this example we see all the information about the use of the PORT parameter.

GGSCI (qa008 as oggsuser@BSSTG1) 12> info param port

param name : port
description : TCP IP port number for the Manager process
argument : integer
default : 7809
range : 1 – 65535
options :
component(s): MGR
mode(s) : none
platform(s) : all platforms
versions :
database(s) : all supported databases (on the supported platforms).
status : current
mandatory : false
dynamic : false
relations : none

 
GETPARAMINFO
 
In this example we see both the default values used by a running extract as well as the actual parameters which the process is using.

GGSCI (qa008 as oggsuser@BSSTG1) 19> send extract etest getparaminfo

Sending GETPARAMINFO request to EXTRACT ETEST …

GLOBALS

enablemonitoring :

/orasw/app/ogg12.2/dirprm/etest.prm

extract : etest
useridalias : oggsuser_bsstg
logallsupcols :
updaterecordformat : COMPACT
tranlogoptions :
integratedparams : (max_sga_size 2048, parallelism 2)
excludeuser : OGGSUSER
exttrail : ./dirdat/bsstg/test/lt
discardfile : ./dirrpt/etest.dsc
append :
megabytes : 1000
warnlongtrans : 2 hour(s)
checkinterval : 30 minute(s)
reportcount :
every : 15 minute(s)
rate :
statoptions :
resetreportstats :
report :
AT : 23:59
reportrollover :
AT : 00:01
ON : MONDAY
getupdatebefores :
table : TEST.*

Default Values

deletelogrecs :
fetchoptions :
userowid :
usekey :
missingrow : ALLOW
usesnapshot :
uselatestversion :
maxfetchstatements : 100
usediagnostics :
detaileddiagnostics :
diagnosticsonall :
nosuppressduplicates :
flushsecs : 1
passthrumessages :
ptkcapturecachemgr :
ptkcaptureift :
ptkcapturenetwork :
ptkcapturequeuestats :
ptkspstats :
tcpsourcetimer :
tranlogoptions :
bufsize : 1024000
asynctransprocessing : 300
checkpointretentiontime : 7.000000
failovertargetdestid : 0
getctasdml :
minefromsnapshotstby :
usenativeobjsupport :
retrydelay : 60
allocfiles : 500
allowduptargetmap :
binarychars :
checkpointsecs : 10 second(s)
cmdtrace : OFF
dynamicresolution :
eofdelay : 1
eofdelaycsecs : 100
functionstacksize : 200
numfiles : 1000
ptkcapturetablestats :
ptkmaxtables : 100
ptktablepollfrequency : 1
statoptions :
reportfetch :
varwidthnchar :
enableheartbeat :
ptkcaptureprocstats :
ptkmonitorfrequency : 1
use_traildefs :
.

 
GGSCI (qa008 as oggsuser@BSSTG1) 21> send etest getparaminfo tranlogoptions

Sending getparaminfo request to EXTRACT ETEST …

/orasw/app/ogg12.2/dirprm/etest.prm

tranlogoptions :
integratedparams : (max_sga_size 2048, parallelism 2)
excludeuser : OGGSUSER

Default Values

tranlogoptions :
bufsize : 1024000
asynctransprocessing : 300
checkpointretentiontime : 7.000000
failovertargetdestid : 0
getctasdml :
minefromsnapshotstby :
usenativeobjsupport :


Configuring a Downstream Capture database for Oracle GoldenGate


Oracle GoldenGate versions 11.2 and above enable downstream capture of data from a single source or from multiple sources. This feature is specific to Oracle databases and helps customers meet the common IT requirement of limiting the number of new processes installed on the production source system.

This feature requires some configuration of redo log transport from the source system to the downstream database. It also requires an open, read-write downstream database, which is where the Integrated Extract will be installed.

 


Integrated Capture Deployment Options

There are two deployment options for integrated capture, depending on where the mining database is deployed. The mining database is the one where the logmining server is deployed.

Local deployment: For local deployment, the source database and the mining database are the same. The source database is the database for which you want to mine the redo stream to capture changes, and also where you deploy the logmining server. Because integrated capture is fully integrated with the database, this mode does not require any special database setup.

Downstream deployment: In downstream deployment, the source and mining databases are different databases. You create the logmining server at the downstream database. You configure redo transport at the source database to ship the redo logs to the downstream mining database for capture at that location. Using a downstream mining server for capture may be desirable to offload the capture overhead and any other overhead from transformation or other processing from the production server, but requires log shipping and other configuration.

Downstream deployment allows you to offload the source database. The source database ships its redo logs to a downstream database, and Extract uses the logmining server at the downstream database to mine the redo logs.

When online logs are shipped to the downstream database, real-time capture by Extract is possible. Changes are captured as though Extract is reading from the source logs. In order to accept online redo logs from a source database, the downstream mining database must have standby redo logs configured.

 

Here is a high-level overview of the process.

• Changes occurring on the source database are written to the Online Redo log files by the database Log Writer background process (LGWR).

• Changes to Online Redo Log files are also written to the Archive Log Files.

• As each online redo log file on the source fills up and is archived, the archived redo is shipped via Redo Transport services to the Downstream database, where it is received by the RFS process running on the Downstream side.

• The Downstream database can be configured with Standby Redo Log files, which receive redo data as soon as it is written on the source database. The RFS (Remote File Server) process writes these changes to the Standby Redo Log files, enabling real-time capture (see the configuration sketch after this list).

• If the Downstream database has not been configured with Standby Redo Log files, then the RFS only receives changes once an entire online redo log file has been filled and archived on the source.

• The Log Mining Server running on the Downstream database extracts these changes in the form of Logical Change Records (LCRs), which are handed over to the GoldenGate Integrated Extract process.

• The GoldenGate Integrated Extract process writes the changes to the GoldenGate trail files.

• The trail files are then sent to the target database server, where the GoldenGate Replicat process finally applies the changes to the target database.
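A minimal configuration sketch of the redo transport and standby redo log pieces referred to above (service name, DB_UNIQUE_NAME values, file paths and sizes are all illustrative assumptions):

-- On the source database: ship redo to the downstream mining database
SQL> alter system set log_archive_config='DG_CONFIG=(sourcedb,downstrm)' scope=both;
SQL> alter system set log_archive_dest_2='SERVICE=downstrm ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=downstrm' scope=both;

-- On the downstream mining database: standby redo logs sized like the source online redo logs
SQL> alter database add standby logfile group 4 ('/u01/oradata/downstrm/srl04.log') size 512M;
SQL> alter database add standby logfile group 5 ('/u01/oradata/downstrm/srl05.log') size 512M;
SQL> alter database add standby logfile group 6 ('/u01/oradata/downstrm/srl06.log') size 512M;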

 

Read the note on How to Configure a Downstream Capture Database for Oracle GoldenGate

Oracle GoldenGate 12.2 New Feature – Integration with Oracle Datapump


In earlier versions, when we had to do an Oracle database table instantiation or initial load, we had to perform a number of steps – basically to handle DML changes occurring on the source table while the export was in progress.

So we first had to ensure that there were no open or long-running transactions in progress, then obtain the current SCN of the database and pass it to the FLASHBACK_SCN parameter of the Datapump export. After the import was over we had to use the HANDLECOLLISIONS parameter initially for the Replicat, and also start the Replicat from a particular position in the trail using the AFTERCSN parameter.

Now with Goldengate 12.2, there is tighter integration with Oracle Datapump Export and Import.

The ADD SCHEMATRANDATA command with the PREPARECSN parameter ensures that the Datapump export captures the instantiation CSN for each table included in the export. The import then populates the system tables and views with those instantiation CSNs, and the new Replicat parameter DBOPTIONS ENABLE_INSTANTIATION_FILTERING filters out DML and DDL records based on each table's instantiation CSN.

Let us look at an example of this new 12.2 feature.

We have a table called TESTME in the SYSADM schema which initially has 266448 rows.

Before running the Datapump export, let us ‘prepare’ the tables via the PREPARECSN parameter of the ADD SCHEMATRANDATA command.

GGSCI (pcu008 as oggsuser@BSDIT1) 12> add schematrandata sysadm preparecsn
2015-12-10 06:38:58 INFO OGG-01788 SCHEMATRANDATA has been added on schema sysadm.
2015-12-10 06:38:58 INFO OGG-01976 SCHEMATRANDATA for scheduling columns has been added on schema sysadm.
2015-12-10 06:38:59 INFO OGG-10154 Schema level PREPARECSN set to mode NOWAIT on schema sysadm.

GGSCI (pcu008 as oggsuser@omqat41) 3> info schematrandata SYSADM
2015-12-13 07:21:55 INFO OGG-06480 Schema level supplemental logging, excluding non-validated keys, is enabled on schema SYSADM.
2015-12-13 07:21:55 INFO OGG-01980 Schema level supplemental logging is enabled on schema SYSADM for all scheduling columns.
2015-12-13 07:21:55 INFO OGG-10462 Schema SYSADM have 571 prepared tables for instantiation.

We run the Datapump export. Note the line:

“FLASHBACK automatically enabled to preserve database integrity.”

pcu008@oracle:BSSTG1>expdp directory=BACKUP_DUMP_DIR dumpfile=testme.dmp tables=sysadm.testme
Export: Release 12.1.0.2.0 – Production on Mon Jan 25 23:45:27 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bitProduction
With the Partitioning, Real Application Clusters, Automatic Storage Management,OLAP,
Advanced Analytics and Real Application Testing options
FLASHBACK automatically enabled to preserve database integrity.
Starting “SYS”.”SYS_EXPORT_TABLE_01”: sys/******** AS SYSDBA directory=BACKUP_DUMP_DIR dumpfile=testme.dmp tables=sysadm.testme
Estimate in progress using BLOCKS method…
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 28 MB
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
. . exported “SYSADM”.”TESTME” 26.86 MB 266448 rows
Master table “SYS”.”SYS_EXPORT_TABLE_01” successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TABLE_01 is:
/home/oracle/backup/testme.dmp
Job “SYS”.”SYS_EXPORT_TABLE_01” successfully completed at Mon Jan 25 23:46:45 2016 elapsed 0 00:00:49

While the export of the TESTME table is in progress, we will insert 29622 more rows into the table. The table will now have 296070 rows.

SQL> insert into sysadm.testme select * from dba_objects;
29622 rows created.

SQL> select count(*) from sysadm.testme;
COUNT(*)
———-
296070

SQL> commit;
Commit complete.

We perform the import on the target database next. Note the number of rows imported: it does not include the 29622 rows which were inserted into the table while the export was in progress.

qat408@oracle:BSSTG1>impdp directory=BACKUP_DUMP_DIR dumpfile=testme.dmp full=y
Import: Release 12.1.0.2.0 – Production on Mon Jan 25 23:51:42 2016
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Username: sys as sysdba
Password:
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 – 64bitProduction
With the Partitioning, Real Application Clusters, Automatic Storage Management,OLAP,
Advanced Analytics and Real Application Testing options
Master table “SYS”.”SYS_IMPORT_FULL_01” successfully loaded/unloaded
import done in AL32UTF8 character set and AL16UTF16 NCHAR character set
export done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set
WARNING: possible data loss in character set conversions
Starting “SYS”.”SYS_IMPORT_FULL_01”: sys/******** AS SYSDBA directory=BACKUP_DUMP_DIR dumpfile=testme.dmp full=y
Processing object type TABLE_EXPORT/TABLE/PROCACT_INSTANCE
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
. . imported “SYSADM”.”TESTME” 26.86 MB 266448 rows
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/MARKER
Job “SYS”.”SYS_IMPORT_FULL_01” successfully completed at Mon Jan 25 23:52:22 2016 elapsed 0 00:00:25

We start the Replicat process on the target – note that we are not positioning the Replicat like we used to do earlier using the AFTERCSN parameter.


GGSCI (qat408 as oggsuser@BSSTG2) 7> start rbsstg1
Sending START request to MANAGER …
REPLICAT RBSSTG1 starting

After starting the Replicat, if we look at its report file we can see that the Replicat process is aware of the SCN (CSN) that was current while the export was in progress, and it knows that any DML or DDL changes after that SCN now need to be applied on the target table.

2016-01-25 23:56:59 INFO OGG-10155 Instantiation CSN filtering is enabled on table SYSADM.TESTME at CSN 402,702,624.

If we query the Replicat statistics a while after it has started, we can see that the Replicat has applied the insert statement (29622 rows) which was run while the export of the table was in progress.

GGSCI (qat408 as oggsuser@BSSTG1) 12> stats rbsstg1 latest

Sending STATS request to REPLICAT RBSSTG1 …

Start of Statistics at 2016-01-26 00:14:55.

Integrated Replicat Statistics:

Total transactions 1.00
Redirected 0.00
DDL operations 0.00
Stored procedures 0.00
Datatype functionality 0.00
Event actions 0.00
Direct transactions ratio 0.00%

Replicating from SYSADM.TESTME to SYSADM.TESTME:

*** Latest statistics since 2016-01-26 00:05:19 ***
Total inserts 29622.00
Total updates 0.00
Total deletes 0.00
Total discards 0.00
Total operations 29622.00

End of Statistics.


How to configure high availability for Oracle GoldenGate on Exadata


This note describes the procedure used to configure high availability for Oracle GoldenGate 12.2 on Oracle Database Machine (Exadata X5-2) using Oracle Database File System (DBFS), Oracle Clusterware and Oracle Grid Infrastructure Agent.

 

The note also describes how we can create different DBFS file systems on the same Exadata compute node if we would like to host a number of different environments like development, test or staging on the same Exadata box and would like to have a separate GoldenGate software installation for each environment.

Read the note …..
 

GoldenGate INSERTALLRECORDS and OGG-01154 SQL error 1400


The GoldenGate INSERTALLRECORDS parameter can be used when the requirement is to maintain, on the target database, transaction history or change data capture (CDC) tables that keep track of the changes a table undergoes at the row level.

With this parameter, every INSERT, UPDATE or DELETE statement on the source tables is captured as an INSERT statement on the target database.

But in certain cases UPDATE statements issued on the source database can cause the Replicat process to abend with the error:

“ORA-01400: cannot insert NULL”.

This can happen when the table has NOT NULL columns that were not part of the update: when the update is converted to an insert, the trail file does not contain values for those columns, so the insert uses NULLs and consequently fails with the ORA-01400 error.


Test Case

We create two tables – SYSTEM.MYTABLES in the source database and SYSTEM.MYTABLES_CDC in the target database.

The SYSTEM.MYTABLES_CDC table on the target has two additional columns for maintaining the CDC or transaction history – OPER_TYPE, which captures the type of DML operation on the table, and CHANGE_DATE, which captures the timestamp of when the change took place.

We create a primary key constraint on the source table – note that the target table has no such constraint, as rows are inserted into the CDC table all the time regardless of whether the DML statement on the source was an INSERT, UPDATE or DELETE.
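For context, a minimal sketch of the Replicat parameter file assumed in this test (the MAP and COLMAP entry matches the one visible in the Replicat report further below; the alias is illustrative):

REPLICAT rep2
USERIDALIAS oggsuser_euro
INSERTALLRECORDS
MAP SYSTEM.MYTABLES, TARGET SYSTEM.MYTABLES_CDC,
COLMAP (USEDEFAULTS,
        CHANGE_DATE = @GETENV ('GGHEADER', 'COMMITTIMESTAMP'),
        OPER_TYPE   = @GETENV ('GGHEADER', 'OPTYPE'));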

SQL> create table system.mytables
  2  (owner VARCHAR2(30) NOT NULL,
  3   table_name VARCHAR2(30) NOT NULL,
  4  tablespace_name VARCHAR2(30) NOT NULL,
  5  logging VARCHAR2(3) NOT NULL);

Table created.

SQL> alter table system.mytables add constraint pk_mytables primary key (owner,table_name);

Table altered.


SQL SYS@euro> create table system.mytables_cdc
  2  (owner VARCHAR2(30) NOT NULL,
  3    table_name VARCHAR2(30) NOT NULL,
  4  tablespace_name VARCHAR2(30) NOT NULL,
  5  logging VARCHAR2(3) NOT NULL,
  6  oper_type VARCHAR2(20),
  7  change_date TIMESTAMP);

Table created.

We now issue the ADD TRANDATA GGSCI command.

Note that issuing the ADD TRANDATA command enables supplemental logging at the table level for PK, UK and FK columns – not for ALL columns.



GGSCI (ogg2.localdomain as oggsuser@sourcedb) 64> dblogin useridalias oggsuser_sourcedb
Successfully logged into database.

GGSCI (ogg2.localdomain as oggsuser@sourcedb) 65> add trandata system.mytables

Logging of supplemental redo data enabled for table SYSTEM.MYTABLES.
TRANDATA for scheduling columns has been added on table 'SYSTEM.MYTABLES'.
GGSCI (ogg2.localdomain as oggsuser@sourcedb) 66> info trandata system.mytables

Logging of supplemental redo log data is enabled for table SYSTEM.MYTABLES.

Columns supplementally logged for table SYSTEM.MYTABLES: OWNER, TABLE_NAME.


We can query the DBA_LOG_GROUPS view to get information about the supplemental logging added for the table MYTABLES.

The ADD TRANDATA command has created a supplemental log group called GGS_72909, and we can see that supplemental logging is enabled for all columns that are part of a primary key, unique key or foreign key constraint.


SQL> SELECT
  2  LOG_GROUP_NAME,
  3   TABLE_NAME,
  4  DECODE(ALWAYS, 'ALWAYS', 'Unconditional','CONDITIONAL', 'Conditional') ALWAYS,
  5  LOG_GROUP_TYPE
  6  FROM DBA_LOG_GROUPS
  7   WHERE TABLE_NAME='MYTABLES' AND OWNER='SYSTEM';

no rows selected

SQL> /

                                     Conditional or
Log Group            Table           Unconditional  Type of Log Group
-------------------- --------------- -------------- --------------------
GGS_72909            MYTABLES        Unconditional  USER LOG GROUP
SYS_C009814          MYTABLES        Unconditional  PRIMARY KEY LOGGING
SYS_C009815          MYTABLES        Conditional    UNIQUE KEY LOGGING
SYS_C009816          MYTABLES        Conditional    FOREIGN KEY LOGGING



SQL> select LOG_GROUP_NAME,COLUMN_NAME from DBA_LOG_GROUP_COLUMNS
  2  where OWNER='SYSTEM' and TABLE_NAME='MYTABLES'
  3  order by 1,2;


Log Group            COLUMN_NAME
-------------------- ------------------------------
GGS_72909            OWNER
GGS_72909            TABLE_NAME


Let us now test the case.

We insert some rows into the source table MYTABLES – these rows are replicated fine to the target table MYTABLES_CDC.


SQL> insert into system.mytables
  2  select OWNER,TABLE_NAME,TABLESPACE_NAME,LOGGING
  3   from DBA_TABLES
  4   where OWNER='SYSTEM' and TABLESPACE_NAME is NOT NULL;

110 rows created.

SQL> commit;

Commit complete.



SQL SYS@euro> select count(*) from system.mytables_cdc;

  COUNT(*)
----------
       110


Let us now see what happens when we run an UPDATE statement on the source database. Note the columns involved in the UPDATE are not PK or UK columns.


SQL> update system.mytables set tablespace_name='USERS' where tablespace_name='SYSTEM';

89 rows updated.

SQL> commit;

Commit complete.


Immediately we see that the Replicat process on the target has ABENDED, and if we examine the Replicat report log we can see the error messages shown below.

2016-06-25 14:40:26  INFO    OGG-06505  MAP resolved (entry SYSTEM.MYTABLES): MAP "SYSTEM"."MYTABLES", TARGET SYSTEM.MYTABLES_CDC, COLMAP (USEDEFAULTS, CHANGE_DATE=@GETENV ('GGHEADER', 'COM
MITTIMESTAMP'), OPER_TYPE=@GETENV ('GGHEADER', 'OPTYPE')).

2016-06-25 14:40:46  WARNING OGG-06439  No unique key is defined for table MYTABLES_CDC. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may
be used to define the key.
Using the following default columns with matching names:
  OWNER=OWNER, TABLE_NAME=TABLE_NAME, TABLESPACE_NAME=TABLESPACE_NAME, LOGGING=LOGGING

2016-06-25 14:40:46  INFO    OGG-06510  Using the following key columns for target table SYSTEM.MYTABLES_CDC: OWNER, TABLE_NAME, TABLESPACE_NAME, LOGGING, OPER_TYPE, CHANGE_DATE.


2016-06-25 14:45:18  WARNING OGG-02544  Unhandled error (ORA-26688: missing key in LCR) while processing the record at SEQNO 7, RBA 19037 in Integrated mode. REPLICAT will retry in Direct m
ode.

2016-06-25 14:45:18  WARNING OGG-01154  SQL error 1400 mapping SYSTEM.MYTABLES to SYSTEM.MYTABLES_CDC OCI Error ORA-01400: cannot insert NULL into ("SYSTEM"."MYTABLES_CDC"."LOGGING") (statu
s = 1400), SQL .


There is a column called LOGGING which is a NOT NULL column – the GoldenGate trail file has data for the other columns (OWNER, TABLE_NAME and TABLESPACE_NAME), but no data was captured in the trail file for the LOGGING column.

Using the LOGDUMP utility we can see this.

Logdump 103 >open ./dirdat/rt000007
Current LogTrail is /ogg/euro/dirdat/rt000007
Logdump 104 >ghdr on
Logdump 105 >detail on
Logdump 106 >detail data
Logdump 107 >pos 32008
Reading forward from RBA 32008
Logdump 108 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    52  (x0034)   IO Time    : 2016/06/25 14:45:02.999.764
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x02)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         67       AuditPos   : 8056764
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/06/25 14:45:02.999.764 FieldComp            Len    52 RBA 32008
Name: SYSTEM.MYTABLES
After  Image:                                             Partition 4   G  e
 0000 000a 0000 0006 5359 5354 454d 0001 0015 0000 | ........SYSTEM......
 0011 4c4f 474d 4e52 5f50 4152 414d 4554 4552 2400 | ..LOGMNR_PARAMETER$.
 0200 0900 0000 0555 5345 5253                     | .......USERS
Column     0 (x0000), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     1 (x0001), Len    21 (x0015)
 0000 0011 4c4f 474d 4e52 5f50 4152 414d 4554 4552 | ....LOGMNR_PARAMETER
 24                                                | $
Column     2 (x0002), Len     9 (x0009)
 0000 0005 5553 4552 53                            | ....USERS


The table has NOT NULL columns that were not updated (the LOGGING column was not part of the UPDATE statement).

Because the column was not touched by the UPDATE, the trail file has no value for it; when the update is converted to an insert, the insert uses NULL for that column and consequently fails with ORA-01400. This is expected behaviour.

We can see that the update on the source database is converted into an insert statement on the target – this is because of the INSERTALLRECORDS parameter we are using in the Replicat parameter file.

So the solution is to enable supplemental logging for ALL columns of the source table.

We will now add supplemental log data for all columns.

SQL> alter table system.mytables add supplemental log data (ALL) columns;

Table altered.

Note that the DBA_LOG_GROUPS view as well as the ADD TRANDATA command output now show that all the columns have supplemental logging enabled.


SELECT
 LOG_GROUP_NAME,
  TABLE_NAME,
 DECODE(ALWAYS, 'ALWAYS', 'Unconditional','CONDITIONAL', 'Conditional') ALWAYS,
 LOG_GROUP_TYPE
  FROM DBA_LOG_GROUPS
  WHERE TABLE_NAME='MYTABLES' AND OWNER='SYSTEM';
SQL>   2    3    4    5    6    7
                                     Conditional or
Log Group            Table           Unconditional  Type of Log Group
-------------------- --------------- -------------- --------------------
GGS_72909            MYTABLES        Unconditional  USER LOG GROUP
SYS_C009814          MYTABLES        Unconditional  PRIMARY KEY LOGGING
SYS_C009815          MYTABLES        Conditional    UNIQUE KEY LOGGING
SYS_C009816          MYTABLES        Conditional    FOREIGN KEY LOGGING
SYS_C009817          MYTABLES        Unconditional  ALL COLUMN LOGGING


GGSCI (ogg2.localdomain as oggsuser@sourcedb) 12> info trandata system.mytables

Logging of supplemental redo log data is enabled for table SYSTEM.MYTABLES.

Columns supplementally logged for table SYSTEM.MYTABLES: ALL.


SQL> alter system switch logfile;

System altered.

Note: STOP and RESTART the Extract and Pump

Note the position where the Extract pump was writing to.

GGSCI (ogg2.localdomain as oggsuser@sourcedb) 28> info pext1 detail

EXTRACT    PEXT1     Last Started 2016-06-25 15:04   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           31081
Log Read Checkpoint  File ./dirdat/lt000012
                     2016-06-25 15:05:16.927851  RBA 1476

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/rt                                          9       1522        100 RMTTRAIL


Delete and recreate the Integrated Replicat

GGSCI (ogg1.localdomain as oggsuser@euro) 2> delete replicat rep2

2016-06-25 15:07:11  WARNING OGG-02541  Replicat could not process some SQL errors before being dropped or unregistered. This may cause the data to be out of sync.

2016-06-25 15:07:14  INFO    OGG-02529  Successfully unregistered REPLICAT REP2 inbound server OGG$REP2 from database.
Deleted REPLICAT REP2.


GGSCI (ogg1.localdomain as oggsuser@euro) 3> add replicat rep2 integrated exttrail ./dirdat/rt
REPLICAT (Integrated) added.

Restart the replicat from the point where it had abended

GGSCI (ogg1.localdomain as oggsuser@euro) 4> alter rep2 extseqno 9 extrba 1522

2016-06-25 15:07:55  INFO    OGG-06594  Replicat REP2 has been altered through GGSCI. Even the start up position might be updated, duplicate suppression remains active in next startup. To override duplicate suppression, start REP2 with NOFILTERDUPTRANSACTION option.

REPLICAT (Integrated) altered.

Now we run an UPDATE statement similar to the one which earlier caused the Replicat to abend.


SQL> update system.mytables set tablespace_name='SYSTEM'  where tablespace_name='USERS';

89 rows updated.

SQL> commit;

Commit complete.

We can see that this time the Replicat has successfully applied the changes to the target table – the 89 rows which were updated on the source table have been transformed into 89 INSERT statements in the CDC table on the target database.

GGSCI (ogg1.localdomain as oggsuser@euro) 14> stats replicat rep2 table SYSTEM.MYTABLES_CDC latest

Sending STATS request to REPLICAT REP2 ...

Start of Statistics at 2016-06-25 15:11:59.

.....
......


Replicating from SYSTEM.MYTABLES to SYSTEM.MYTABLES_CDC:

*** Latest statistics since 2016-06-25 15:11:09 ***
        Total inserts                                     89.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                  89.00

End of Statistics.

If we now examine the trail file on the target, we can see that this time all the table columns, including the LOGGING column (which was missing earlier), have been captured in the trail file.

Logdump 109 >open ./dirdat/rt000009
Current LogTrail is /ogg/euro/dirdat/rt000009
Logdump 110 >ghdr on
Logdump 111 >detail on
Logdump 112 >detail data
Logdump 113 >pos 1522
Reading forward from RBA 1522
Logdump 114 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    56  (x0038)   IO Time    : 2016/06/25 15:10:52.999.941
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x00)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         68       AuditPos   : 186384
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/06/25 15:10:52.999.941 FieldComp            Len    56 RBA 1522
Name: SYSTEM.MYTABLES
After  Image:                                             Partition 4   G  b
 0000 000a 0000 0006 5359 5354 454d 0001 000d 0000 | ........SYSTEM......
 0009 4d59 4f42 4a45 4354 5300 0200 0a00 0000 0653 | ..MYOBJECTS........S
 5953 5445 4d00 0300 0700 0000 0359 4553           | YSTEM........YES
Column     0 (x0000), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     1 (x0001), Len    13 (x000d)
 0000 0009 4d59 4f42 4a45 4354 53                  | ....MYOBJECTS
Column     2 (x0002), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     3 (x0003), Len     7 (x0007)
 0000 0003 5945 53                                 | ....YES

Note the data in the CDC table on the target

SQL SYS@euro>  select tablespace_name,oper_type from system.mytables_cdc
  2   where TABLE_NAME ='MYTABLES';

TABLESPACE_NAME                OPER_TYPE
------------------------------ --------------------
SYSTEM                         INSERT
USERS                          SQL COMPUPDATE
SYSTEM                         SQL COMPUPDATE

Oracle 12c Resource Manager – CDB and PDB resource plans


In a CDB, since multiple pluggable databases share a common set of resources, we can use Resource Manager to prevent workloads from competing with each other for both system and CDB resources.

Let us look at an example of managing resources for Pluggable Databases (between PDB’s) at the multitenant Container database level as well as within a particular PDB.

The same can be achieved using 12c Cloud Control, but displayed here are the steps to be performed at the command line using the DBMS_RESOURCE_MANAGER package.

With Resource Manager at the Pluggable Database level, we can limit CPU usage of a particular PDB as well as the number of parallel execution servers which a particular PDB can use.

To allocate resources among PDBs we use the concept of shares: we assign shares to individual PDBs, and a higher share for a PDB results in a higher guaranteed allocation of resources to that PDB.

At a high level the steps involved include:

• Create a Pending Area

• Create a CDB resource plan

• Create directives for the PDB’s

• Optionally update the default directives, which specify the resources allocated to any newly created PDB or to a PDB for which no directive has been explicitly defined

• Optionally update the directives which apply by default to the Automatic Maintenance Tasks that run in the out-of-the-box maintenance windows

• Validate the Pending Area

• Submit the Pending Area

• Enable the plan at the CDB level by setting the RESOURCE_MANAGER_PLAN parameter

Let us look at an example.

We have 5 Pluggable databases contained in the Container database and we wish to enable resource management at the PDB level.

We wish to guarantee CPU allocation in the ratio 4:3:1:1:1 so that the CPU is distributed among the PDB’s in this manner:

PDBPROD1 : 40%
PDBPROD2: 30%
PDBPROD3: 10%
PDBPROD4 : 10%
PDBPROD5: 10%

Further for PDB’s PDBPROD3, PDBPROD4 and PDBPROD5 we wish to ensure that CPU utilization for these 3 PDB’s never crosses the 70% limit.

Also for these 3 PDB’s we would like to limit the maximum number of parallel execution servers available to the PDB.

The value of 70% means that if the PARALLEL_SERVERS_TARGET initialization parameter is 200, then the PDB cannot use more than a maximum of 140 parallel execution servers. For PDBPROD1 and PDBPROD2 there is no limit, so they can use all 200 parallel execution servers if available.

We also want to limit the resources used by the Automatic Maintenance Task jobs when they execute in a maintenance window, and to specify a default resource allocation limit for newly created PDBs or PDBs for which a resource limit directive has not been explicitly defined.
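Before downloading the full note, here is a minimal sketch of the CDB plan described above, using the DBMS_RESOURCE_MANAGER package (the plan name is illustrative; the shares and limits follow the 4:3:1:1:1 ratio and the 70% caps just discussed):

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();

  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'prod_cdb_plan',
    comment => 'CDB plan for PDBPROD1 to PDBPROD5');

  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'prod_cdb_plan', pluggable_database => 'PDBPROD1', shares => 4);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'prod_cdb_plan', pluggable_database => 'PDBPROD2', shares => 3);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'prod_cdb_plan', pluggable_database => 'PDBPROD3', shares => 1,
    utilization_limit => 70, parallel_server_limit => 70);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'prod_cdb_plan', pluggable_database => 'PDBPROD4', shares => 1,
    utilization_limit => 70, parallel_server_limit => 70);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'prod_cdb_plan', pluggable_database => 'PDBPROD5', shares => 1,
    utilization_limit => 70, parallel_server_limit => 70);

  -- the optional default and autotask directives would be updated here
  -- (UPDATE_CDB_DEFAULT_DIRECTIVE and UPDATE_CDB_AUTOTASK_DIRECTIVE)

  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

-- enable the plan at the CDB root
SQL> alter system set resource_manager_plan = 'prod_cdb_plan';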

Download the note …
