
GoldenGate Bounded Recovery


The Oracle online redo log files contain both committed and uncommitted transactions, but Oracle GoldenGate writes only committed transactions to the trail files. So the obvious question is: what happens to transactions that have not yet been committed, and in particular to long-running uncommitted transactions?

Long-running transactions in batch jobs can sometimes take several hours to complete. Until such a transaction commits, how does GoldenGate handle the following situation: an extract is reading from a particular online redo log file when the transaction starts; over time, other DML activity in the database causes that redo log file to be archived; and the archive log file is then no longer available on disk because the nightly RMAN backup job deletes archive logs after the backup completes?

So GoldenGate has two kinds of recovery: Normal Recovery, where the extract process needs all the archive log files starting from its current recovery read checkpoint, and Bounded Recovery, which is what we will discuss here with an example.

In very simple terms, there is a Bounded Recovery (BR) interval for an extract, which defaults to 4 hours, and at every BR interval the extract process makes a Bounded Recovery checkpoint. At each checkpoint GoldenGate checks for any transactions that have been open for longer than one BR interval and writes the current state and data of the extract for those transactions to disk, by default into the BR sub-directory under the GoldenGate software home. This continues at every BR interval until the long-running transaction is committed or rolled back.

In our extract parameter file we set the interval with the BR BRINTERVAL parameter:

BR BRINTERVAL 20M
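For context, here is a minimal sketch of an extract parameter file carrying this setting. Everything apart from the BR clause (extract name, credentials, remote host and trail) is illustrative and would need to match your own environment:

EXTRACT ext1
USERID ggsuser, PASSWORD ggspassword
-- Persist the state of long-running transactions to disk every 20 minutes,
-- into the BR sub-directory under the GoldenGate home (the default location)
BR BRINTERVAL 20M, BRDIR ./BR
RMTHOST targethost, MGRPORT 7809
RMTTRAIL ./dirdat/zz
TABLE sh.*;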

 

This is what the official documentation states:

The use of disk persistence to store and then recover long-running transactions enables Extract to manage a situation that rarely arises but would otherwise significantly (adversely) affect performance if it occurred. The beginning of a long-running transaction is often very far back in time from the place in the log where Extract was processing when it stopped. A long-running transaction can span numerous old logs, some of which might no longer reside on accessible storage or might even have been deleted. Not only would it take an unacceptable amount of time to read the logs again from the start of a long-running transaction but, since long-running transactions are rare, most of that work would be the redundant capture of other transactions that were already written to the trail or discarded. Being able to restore the state and data of persisted long-running transactions eliminates that work.

 

In this example we will see how BR works by setting the extract's BR interval to a low value of 20 minutes and performing an INSERT statement which we do not commit. When we first issue the INSERT statement, the extract process is reading from a particular online redo log (sequence #14878).

We switch some redo log files to simulate activity in the database, and then back up and delete archive log sequence 14878. We can see that at every 20-minute interval a Bounded Recovery checkpoint is performed and the information the extract needs about the long-running transaction is written to the BR directory on disk. Even though the archive log file is no longer present on disk, the extract process does not need it: it uses the Bounded Recovery data in the BR directory to write the data to the trail files when the long-running transaction is finally committed.

We issue this INSERT statement and do not commit the transaction; this is our test long-running transaction.

 

SQL> insert into myobjects
select object_id,object_name,object_type from dba_objects;

75372 rows created.

 

Check the online redo log sequence the extract is currently reading from; in this case it is 14878.

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 2> info ext1

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:08 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:10:21 Seqno 14878, RBA 5936128
SCN 0.9137531 (9137531)

 

Using the SEND EXTRACT SHOWTRANS command, we can identify any open, in-progress transactions.

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 4> send ext1 showtrans

Sending SHOWTRANS request to EXTRACT EXT1 …

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 14878, RBA 116752

------------------------------------------------------------
XID: 10.16.1533
Items: 75372
Extract: EXT1
Redo Thread: 1
Start Time: 2014-06-21:18:10:14
SCN: 0.9137521 (9137521)
Redo Seq: 14878
Redo RBA: 116752
Status: Running

 

The INFO EXTRACT SHOWCH command gives us more detail about the extract's checkpoints: essentially, the position in the source (redo/transaction logs) it is reading from and the position in the target (trail file) it is currently writing to.

It shows us the redo log file (or archive log file) which the extract first read when it started up (Startup Checkpoint), which is sequence 14861.

It shows us the position of the oldest unprocessed transaction in the online redo log/archive log files (Recovery Checkpoint), which is sequence 14878 and SCN 9137521.

Finally, it shows us the current position, in terms of SCN, where the extract last read a record in the online redo log file (Current Checkpoint). This is still sequence 14878, but the SCN has advanced to 9137612 because of other activity in the database.

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 5> info ext1 showch

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:06 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:11:41 Seqno 14878, RBA 5977088
SCN 0.9137612 (9137612)

Current Checkpoint Detail:

Read Checkpoint #1

Oracle Redo Log

Startup Checkpoint (starting position in the data source):
Thread #: 1
Sequence #: 14861
RBA: 5918224
Timestamp: 2014-06-21 16:49:33.000000
SCN: 0.9129707 (9129707)
Redo File: /u01/app/oracle/fast_recovery_area/GGATE1/archivelog/2014_06_21/o1_mf_1_14861_9tbo7pys_.arc

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 1
Sequence #: 14878
RBA: 116752
Timestamp: 2014-06-21 18:10:14.000000
SCN: 0.9137521 (9137521)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

Current Checkpoint (position of last record read in the data source):
Thread #: 1
Sequence #: 14878
RBA: 5977088
Timestamp: 2014-06-21 18:11:41.000000
SCN: 0.9137612 (9137612)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

Write Checkpoint #1

GGS Log Trail

Current Checkpoint (current write position):
Sequence #: 3
RBA: 8130790
Timestamp: 2014-06-21 18:11:44.414364
Extract Trail: ./dirdat/zz
Trail Type: RMTTRAIL

 

After some time (more than 20 minutes) we issue the same SHOWCH command; let us look at how the output differs from the previous SHOWCH.

We can see that because of database activity the extract is now reading from the online redo log sequence 14884 (earlier it was 14878).

But what has remained unchanged is the Recovery Checkpoint, which is the oldest redo log sequence that the extract will need to access when the long-running transaction currently in progress is finally committed.

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 2> info ext1 showch

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:04 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:40:34 Seqno 14884, RBA 72704
SCN 0.9139491 (9139491)

Current Checkpoint Detail:

Read Checkpoint #1

Oracle Redo Log

Startup Checkpoint (starting position in the data source):
Thread #: 1
Sequence #: 14861
RBA: 5918224
Timestamp: 2014-06-21 16:49:33.000000
SCN: 0.9129707 (9129707)
Redo File: /u01/app/oracle/fast_recovery_area/GGATE1/archivelog/2014_06_21/o1_mf_1_14861_9tbo7pys_.arc

Recovery Checkpoint (position of oldest unprocessed transaction in the data source):
Thread #: 1
Sequence #: 14878
RBA: 116752
Timestamp: 2014-06-21 18:10:14.000000
SCN: 0.9137521 (9137521)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

Current Checkpoint (position of last record read in the data source):
Thread #: 1
Sequence #: 14884
RBA: 72704
Timestamp: 2014-06-21 18:40:34.000000
SCN: 0.9139491 (9139491)
Redo File: /u01/app/oracle/oradata/ggate1/redo03.log

 

We also see important information related to the Bounded Recovery (BR) Checkpoint via the INFO EXTRACT SHOWCH command.

As mentioned earlier, we changed the BR interval for this example from the default of 4 hours to 20 minutes, so at every 20-minute BR interval (in this case 18:07, 18:27, 18:47 and so on), information about the current state and data of the extract will be written to disk in the BR sub-directory.

So we see that at the 18:27 BR interval, the BR checkpoint persisted information from redo log sequence 14881 to disk. If there were a failure, or if the extract were restarted, it would not need any redo log files or archive log files prior to sequence 14881.

 

BR Previous Recovery Checkpoint:
Thread #: 0
Sequence #: 0
RBA: 0
Timestamp: 2014-06-21 18:07:35.982719
SCN: Not available
Redo File:

BR Begin Recovery Checkpoint:
Thread #: 0
Sequence #: 14878
RBA: 116752
Timestamp: 2014-06-21 18:10:14.000000
SCN: 0.9137521 (9137521)
Redo File:

BR End Recovery Checkpoint:
Thread #: 1
Sequence #: 14881
RBA: 139776
Timestamp: 2014-06-21 18:27:38.000000
SCN: 0.9138688 (9138688)
Redo File:

 

We can see that some files have been created in the BR directory for extract EXT1.

 

GGSCI (kens-orasql-001-dev.corporateict.domain) 4> info ext1

EXTRACT EXT1 Last Started 2014-06-21 18:07 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:06 ago)
Process ID 15190
Log Read Checkpoint Oracle Redo Logs
2014-06-21 18:41:35 Seqno 14884, RBA 131072
SCN 0.9139583 (9139583)

GGSCI (kens-orasql-001-dev.corporateict.domain)

GGSCI (kens-orasql-001-dev.corporateict.domain) 3> shell ls -l ./BR/EXT1

total 20
-rw-r----- 1 oracle oinstall 65536 Jun 21 18:27 CP.EXT1.000000015
drwxr-x--- 2 oracle oinstall 4096 Jun 19 17:07 stale

 

So what happens if we delete the old archive log sequence 14878 from disk? Since the BR checkpoint has already persisted the long-running transaction information contained in sequence 14878 to disk, the extract should not need to access this older archive log file.

To test this we take a backup of archive log sequence 14878 and then delete it. Remember, this was the redo log sequence the extract was reading from when the long-running transaction first started.

 

RMAN> backup archivelog sequence 14878 delete input;

Starting backup at 21-JUN-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=24 device type=DISK
channel ORA_DISK_1: starting archived log backup set
channel ORA_DISK_1: specifying archived log(s) in backup set
input archived log thread=1 sequence=14878 RECID=30497 STAMP=850846396
channel ORA_DISK_1: starting piece 1 at 21-JUN-14
channel ORA_DISK_1: finished piece 1 at 21-JUN-14
piece handle=/u01/app/oracle/fast_recovery_area/GGATE1/backupset/2014_06_21/o1_mf_annnn_TAG20140621T234659_9tcb7msp_.bkp tag=TAG20140621T234659 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
channel ORA_DISK_1: deleting archived log(s)
archived log file name=/u01/app/oracle/fast_recovery_area/GGATE1/archivelog/2014_06_21/o1_mf_1_14878_9tbpowlm_.arc RECID=30497 STAMP=850846396
Finished backup at 21-JUN-14
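We can cross-check from the database that the archived log for sequence 14878 is now flagged as deleted. A quick query against V$ARCHIVED_LOG (the output shown is indicative; after RMAN deletes the file, NAME is cleared and the record is marked deleted):

SQL> select sequence#, name, deleted, status
  2  from v$archived_log
  3  where thread# = 1 and sequence# = 14878;

 SEQUENCE# NAME                 DEL S
---------- -------------------- --- -
     14878                      YES D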

 

Let us now finally commit the long-running transaction.

 

SQL> insert into myobjects
2 select object_id,object_name,object_type from dba_objects;

75372 rows created.

SQL> commit;

Commit complete.

 

In the extract EXT1 report we can see information about the long-running transaction as well as the Bounded Recovery checkpoints, and we can see that at every 20-minute interval the redo log sequence recorded by the Bounded Recovery checkpoint advances.

 

2014-06-21 18:17:42 WARNING OGG-01027 Long Running Transaction: XID 10.16.1533, Items 75372, Extract EXT1, Redo Thread 1, SCN 0.9137521 (9137521), Redo Seq #14878, Redo RBA 116752.

2014-06-21 18:27:41 INFO OGG-01971 The previous message, 'WARNING OGG-01027', repeated 1 times.

2014-06-21 18:27:41 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p23540_extr: start=SeqNo: 14878, RBA: 116752, SCN: 0.9137521 (9137521), Timestamp: 2014-06-21 18:10:14.000000, end=SeqNo: 14881, RBA: 139776, SCN: 0.9138688 (9138688), Timestamp: 2014-06-21 18:27:38.000000, Thread: 1.

2014-06-21 18:47:50 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p23540_extr: start=SeqNo: 14885, RBA: 144912, SCN: 0.9139983 (9139983), Timestamp: 2014-06-21 18:47:47.000000, Thread: 1, end=SeqNo: 14885, RBA: 145408, SCN: 0.9139983 (9139983), Timestamp: 2014-06-21 18:47:47.000000, Thread: 1.

2014-06-21 19:07:59 INFO OGG-01738 BOUNDED RECOVERY: CHECKPOINT: for object pool 1: p23540_extr: start=SeqNo: 14889, RBA: 176144, SCN: 0.9141399 (9141399), Timestamp: 2014-06-21 19:07:56.000000, Thread: 1, end=SeqNo: 14889, RBA: 176640, SCN: 0.9141399 (9141399), Timestamp: 2014-06-21 19:07:56.000000, Thread: 1.

 

So a point to keep in mind:

If we are using the Bounded Recovery interval at its default value of 4 hours, ensure that, at a minimum, the archive log files for the past 8 hours are kept on disk to cater for any long-running transactions.
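One way to enforce such a retention window in the nightly housekeeping job, as a sketch assuming RMAN manages the archive log deletion and an 8-hour window is required, is to back everything up but delete only the logs older than the window:

RMAN> backup archivelog all not backed up 1 times;
RMAN> delete noprompt archivelog all completed before 'sysdate - 8/24';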


GoldenGate Director Security


The GoldenGate Director (Server and Client) is part of the Oracle GoldenGate Management pack suite of products.

Let us see how security is managed in the Director.

We launch the Director Administration tool on Unix via the run-admin.sh shell script.

If we are using Oracle WebLogic Server 12c and above, the default admin user is 'diradmin'; for other releases it is 'admin'.

When we create a user via the Director Admin tool it creates a WebLogic domain user in the background, and we will see this in the example when we connect using the WebLogic Administration Console.
 
 


   

After creating a user we then have to create a Data Source and here is where we define the security layer.

A Data Source is essentially where we define the connection details for a particular GoldenGate instance: the host and port where the manager is running, the GoldenGate version and operating system, and the database username and password used by the GoldenGate schema.

In the Access Control section of the interface screen, we have a few options.

If we leave the Owner field blank, the Data Source in the Director Client will be visible to, and manageable by, all admin users.

If we explicitly define an owner for the Data Source by selecting one of the users we had earlier created (or the default out of the box users like diradmin or admin), then the Data Source in the Director Client will only be visible to that particular user. If another user connects to Director Client, they will not see that Data Source.

The next option is to define an owner for the Data Source and tick the Host is Observable check box. Users other than the owner will then be able to see the Data Source in Director Client, along with the extract and replicat processes associated with it, but will not be able to perform any administrative activity such as starting or stopping an extract/replicat, modifying parameter files, or even using the GGSCI interface to connect to the GoldenGate instance associated with that Data Source.
 


 

What happens if we want more fine-grained access control in Director security, controlling which Data Sources are visible to and manageable by which Director admin users? We do this at the WebLogic end. Remember that when we install GoldenGate Director we need an existing WebLogic Server environment; a domain for GoldenGate Director is created and managed by that WebLogic Server.

We have two admin users, usera and userb, created using the Director Admin utility. We do not want usera to be able to perform any administrative tasks in the GoldenGate environment via the Director Client; usera should just be able to view the environment, while userb has full access.

We launch the WebLogic Server Administration Console (note that the out-of-the-box username and password is weblogic).

If we click on the Security Realms link, we see that the installation has created a realm called ggRealm.
 


  

Click on the ggRealm link and expand the Users and Groups tab. We will see a list of WebLogic users. We had earlier created admin users (usera and userb) in the Director Administration utility, and we see that corresponding WebLogic Server users have also been created.

Let us see the groups the user usera is currently a member of; in this case the only chosen group for usera is the group User.
 

Now connect as usera using the Director Client.
 


   

We can see that while the Data Sources are visible, they have a lock symbol attached to them meaning that usera can only see the processes associated with the Data Source when he drags the data source to the Diagram panel. He cannot create, modify, start or stop any of the extract or replicat processes associated with that Data Source.

Even in the GGSCI tab, we see that he cannot connect to any of the associated GoldenGate instances, as none are available.
 


   

Go back to the WebLogic Administration Console and make userb a member of the Admin group.
 


   

Now when we connect as userb in the Director Client, all the Data Sources are visible and none are locked, and in the GGSCI tab's drop-down list we can connect to all the Data Sources via GGSCI.
 


 

12c RMAN New Feature – Cross Platform Data Transport Using Incremental Backups


In Oracle 12c we can now transport data across platforms using full as well as incremental backup sets.

The use of RMAN incremental backups can significantly reduce overall application downtime during cross-platform data migration and will be useful in future 11g to 12c upgrades.

In this example we look at migrating data from an Oracle 11g R2 database on Windows platform to an Oracle 12c container database hosted on a Linux platform using incremental backup sets.

Note that since both the source (Windows) and target (Linux) platforms have the same little-endian format, we do not have to do any endian conversion in this case. Otherwise we would have to use the RMAN CONVERT command to perform the conversion on either the source or the target database.
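We can confirm the endian format of both platforms from the V$TRANSPORTABLE_PLATFORM view; both of these platforms report Little:

SQL> select platform_name, endian_format
  2  from v$transportable_platform
  3  where platform_name in ('Microsoft Windows x86 64-bit', 'Linux x86 64-bit');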

Let us look at the steps involved.

Create the tablespaces and the test table and index in the source 11g Windows database

SQL> create tablespace tts_data
  2  datafile 'C:\ORADATA\ORCL2\TTS_DATA01.DBF' size 20m;

Tablespace created.

SQL> create tablespace tts_ind
  2  datafile 'C:\ORADATA\ORCL2\TTS_IND01.DBF' size 10m;

Tablespace created.



SQL> alter user sh quota unlimited on tts_data;

User altered.

SQL> alter user sh quota unlimited on tts_ind;

User altered.

SQL> create table sh.mycustomers
  2  tablespace tts_data
  3  as select * from sh.customers
  4  where rownum < 1001;

Table created.

SQL> create index sh.mycustomers_ind
  2  on sh.mycustomers (cust_id)
  3  tablespace tts_ind;

Index created.

At this stage the tablespaces are still in READ WRITE mode.
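A quick sanity check of the tablespace status before we take the level 0 backup:

SQL> select tablespace_name, status
  2  from dba_tablespaces
  3  where tablespace_name in ('TTS_DATA', 'TTS_IND');

TABLESPACE_NAME                STATUS
------------------------------ ---------
TTS_DATA                       ONLINE
TTS_IND                        ONLINE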

RMAN> backup  incremental level 0 format 'C:\ORADATA\ORCL2\BKP_TS_LEV0_%U' tablespace tts_data, tts_ind;

Starting backup at 21-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=223 device type=DISK
channel ORA_DISK_1: starting incremental level 0 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=C:\ORADATA\ORCL2\TTS_DATA01.DBF
input datafile file number=00008 name=C:\ORADATA\ORCL2\TTS_IND01.DBF
channel ORA_DISK_1: starting piece 1 at 21-JUL-14
channel ORA_DISK_1: finished piece 1 at 21-JUL-14
piece handle=C:\ORADATA\ORCL2\BKP_TS_LEV0_04PDVJFU_1_1 tag=TAG20140721T190742 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 21-JUL-14

Copy the RMAN backupset pieces from the Windows OS server to the Linux OS server

Connect to the Oracle 12c Container database – pluggable database PDB3 and restore the copied backupset

[oracle@orasql-001-dev ~]$ rman target sys/passwd@pdb3

Recovery Manager: Release 12.1.0.1.0 - Production on Mon Jul 21 19:11:48 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)

RMAN> restore from platform 'Microsoft Windows x86 64-bit' foreign datafile 7 format '/u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf' from backupset '/home/oracle/BKP_TS_LEV0_04PDVJFU_1_1';

Starting restore at 21-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=284 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file 00007
channel ORA_DISK_1: reading from backup piece /home/oracle/BKP_TS_LEV0_04PDVJFU_1_1
channel ORA_DISK_1: restoring foreign file 7 to /u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf
channel ORA_DISK_1: foreign piece handle=/home/oracle/BKP_TS_LEV0_04PDVJFU_1_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 21-JUL-14

RMAN>  restore from platform 'Microsoft Windows x86 64-bit' foreign datafile 8 format '/u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf' from backupset '/home/oracle/BKP_TS_LEV0_04PDVJFU_1_1';

Starting restore at 21-JUL-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file 00008
channel ORA_DISK_1: reading from backup piece /home/oracle/BKP_TS_LEV0_04PDVJFU_1_1
channel ORA_DISK_1: restoring foreign file 8 to /u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf
channel ORA_DISK_1: foreign piece handle=/home/oracle/BKP_TS_LEV0_04PDVJFU_1_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
Finished restore at 21-JUL-14

Note the datafiles have been restored in the required location

[oracle@orasql-001-dev ~]$ ls -l /u01/app/oracle/oradata/condb1/pdb3/tts*
-rw-r----- 1 oracle oinstall 20979712 Jul 21 19:13 /u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf
-rw-r----- 1 oracle oinstall 10493952 Jul 21 19:14 /u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf

Now update some rows in the source Windows 11g database

SQL> update sh.mycustomers
  2  set cust_city='Perth';

1000 rows updated.

SQL> commit;

Commit complete.

This time take an RMAN level 1 incremental backup.

RMAN> backup incremental level 1 format 'C:\ORADATA\ORCL2\BKP_TS_LEV1_%U' tablespace tts_data, tts_ind;

Starting backup at 21-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=223 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=C:\ORADATA\ORCL2\TTS_DATA01.DBF
input datafile file number=00008 name=C:\ORADATA\ORCL2\TTS_IND01.DBF
channel ORA_DISK_1: starting piece 1 at 21-JUL-14
channel ORA_DISK_1: finished piece 1 at 21-JUL-14
piece handle=C:\ORADATA\ORCL2\BKP_TS_LEV1_05PDVK3E_1_1 tag=TAG20140721T191806 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 21-JUL-14

Again copy the level 1 incremental backupset pieces from the Windows server to the Linux server and initiate a recovery in the 12c environment.

[oracle@orasql-001-dev ~]$ rman target sys/passwd@pdb3

Recovery Manager: Release 12.1.0.1.0 - Production on Mon Jul 21 19:11:48 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

connected to target database: CONDB1 (DBID=3738773602)



RMAN> recover from platform 'Microsoft Windows x86 64-bit' foreign datafilecopy  '/u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf' from backupset '/home/oracle/BKP_TS_LEV1_05PDVK3E_1_1';

Starting restore at 21-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=266 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file /u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/BKP_TS_LEV1_05PDVK3E_1_1
channel ORA_DISK_1: foreign piece handle=/home/oracle/BKP_TS_LEV1_05PDVK3E_1_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 21-JUL-14

RMAN> recover from platform 'Microsoft Windows x86 64-bit' foreign datafilecopy  '/u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf' from backupset '/home/oracle/BKP_TS_LEV1_05PDVK3E_1_1';

Starting restore at 21-JUL-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file /u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/BKP_TS_LEV1_05PDVK3E_1_1
channel ORA_DISK_1: foreign piece handle=/home/oracle/BKP_TS_LEV1_05PDVK3E_1_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 21-JUL-14

Now on the source 11g Windows database we will make the tablespaces read only. From this point onwards we have a brief outage, just long enough to take a final incremental backup and a Data Pump export of the tablespace metadata.

Remember, in versions prior to 12c we would have had to make the tablespaces read only right at the very beginning of the exercise, and our outage would have been much longer in that case.

SQL> alter tablespace tts_data read only;

Tablespace altered.

SQL> alter tablespace tts_ind read only;

Tablespace altered.

C:\windows\system32>rman target /

Recovery Manager: Release 11.2.0.1.0 - Production on Mon Jul 21 20:19:20 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: ORCL2 (DBID=834251529)

RMAN> backup incremental level 1 format 'C:\ORADATA\ORCL2\BKP_TS_LEV1_%U' tablespace tts_data, tts_ind;

Starting backup at 21-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=223 device type=DISK
channel ORA_DISK_1: starting incremental level 1 datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00007 name=C:\ORADATA\ORCL2\TTS_DATA01.DBF
input datafile file number=00008 name=C:\ORADATA\ORCL2\TTS_IND01.DBF
channel ORA_DISK_1: starting piece 1 at 21-JUL-14
channel ORA_DISK_1: finished piece 1 at 21-JUL-14
piece handle=C:\ORADATA\ORCL2\BKP_TS_LEV1_06PDVNMG_1_1 tag=TAG20140721T201928 comment=NONE
channel ORA_DISK_1: backup set complete, elapsed time: 00:00:01
Finished backup at 21-JUL-14


C:\windows\system32>expdp directory=data_pump_dir dumpfile=tts_exp.dmp logfile=tts_exp.log transport_tablespaces=tts_data,tts_ind

Export: Release 11.2.0.1.0 - Production on Mon Jul 21 20:21:02 2014

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_TRANSPORTABLE_01":  sys/******** AS SYSDBA directory=data_pump_dir dumpfile=tts_exp.dmp logfile=tts_exp.log transport_tablespaces=tts_data,tts_ind
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Master table "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_TRANSPORTABLE_01 is:
  C:\APP\GAVIN\ADMIN\ORCL2\DPDUMP\TTS_EXP.DMP
******************************************************************************
Datafiles required for transportable tablespace TTS_DATA:
  C:\ORADATA\ORCL2\TTS_DATA01.DBF
Datafiles required for transportable tablespace TTS_IND:
  C:\ORADATA\ORCL2\TTS_IND01.DBF
Job "SYS"."SYS_EXPORT_TRANSPORTABLE_01" successfully completed at 20:21:48



SQL> alter tablespace tts_data read write;

Tablespace altered.

SQL> alter tablespace tts_ind read write;

Tablespace altered.

Copy the data pump export dump file and the final RMAN incremental backupset piece from source Windows to target Linux server and perform a recovery.

RMAN> recover from platform 'Microsoft Windows x86 64-bit' foreign datafilecopy  '/u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf' from backupset '/home/oracle/BKP_TS_LEV1_06PDVNMG_1_1';

Starting restore at 21-JUL-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=36 device type=DISK

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file /u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/BKP_TS_LEV1_06PDVNMG_1_1
channel ORA_DISK_1: foreign piece handle=/home/oracle/BKP_TS_LEV1_06PDVNMG_1_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
Finished restore at 21-JUL-14



RMAN> recover from platform 'Microsoft Windows x86 64-bit' foreign datafilecopy  '/u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf' from backupset '/home/oracle/BKP_TS_LEV1_06PDVNMG_1_1';

Starting restore at 21-JUL-14
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring foreign file /u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf
channel ORA_DISK_1: reading from backup piece /home/oracle/BKP_TS_LEV1_06PDVNMG_1_1
channel ORA_DISK_1: foreign piece handle=/home/oracle/BKP_TS_LEV1_06PDVNMG_1_1
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:03
Finished restore at 21-JUL-14

Lastly, perform a Data Pump import of the tablespace metadata and make the tablespaces read write.

[oracle@orasql-001-dev pdb3]$ impdp '"sys@pdb3 as sysdba"' directory=mytest dumpfile=TTS_EXP.DMP transport_datafiles='/u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf','/u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf'

Import: Release 12.1.0.1.0 - Production on Mon Jul 21 21:38:22 2014

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options
Master table "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully loaded/unloaded
Source TSTZ version is 11 and target TSTZ version is 18.
Source timezone version is +00:00 and target timezone version is -07:00.
Starting "SYS"."SYS_IMPORT_TRANSPORTABLE_01":  "sys/********@pdb3 AS SYSDBA" directory=mytest dumpfile=TTS_EXP.DMP transport_datafiles=/u01/app/oracle/oradata/condb1/pdb3/tts_data01.dbf,/u01/app/oracle/oradata/condb1/pdb3/tts_ind01.dbf
Processing object type TRANSPORTABLE_EXPORT/PLUGTS_BLK
Processing object type TRANSPORTABLE_EXPORT/TABLE
Processing object type TRANSPORTABLE_EXPORT/INDEX
Processing object type TRANSPORTABLE_EXPORT/INDEX_STATISTICS
Processing object type TRANSPORTABLE_EXPORT/POST_INSTANCE/PLUGTS_BLK
Job "SYS"."SYS_IMPORT_TRANSPORTABLE_01" successfully completed at Mon Jul 21 21:38:37 2014 elapsed 0 00:00:10


SQL> alter session set container=pdb3;

Session altered.

SQL>  alter tablespace tts_data read write;

Tablespace altered.

SQL> alter tablespace tts_ind  read write;

Tablespace altered.

Verify that the table reflects the last update statement we executed on the source database.

SQL> select count(*) from sh.mycustomers;

  COUNT(*)
----------
      1000

SQL> select distinct cust_city from sh.mycustomers;

CUST_CITY
------------------------------
Perth

12c Cloud Control Security – Dynamic Groups and Roles


Let us have a look at some of the security features in 12c Cloud Control and we will look at Roles and Dynamic Groups.

Say, for example, we have a team of DBAs supporting both MS SQL Server and Oracle databases, and the 12c agents have now been deployed to all the Oracle and SQL Server hosts.

The requirement is that when the SQL Server DBAs connect via 12c Cloud Control they should only see the SQL Server target hosts and databases, and likewise when the Oracle DBAs connect they should only see the target servers where the Oracle databases are hosted.

We first create a Dynamic Group from the Setup >> Add Target >> Dynamic Group menu.

Enter the group name MSSQL_DBA_GROUP and click the Privilege Propagation box

Click on the Define Membership Criteria button

In the Target Types field select Microsoft SQL Server from the list of target values



Create a similar group called ORA_DBA_GROUP

We can see that, depending on the targets already discovered on the hosts where the agents have been deployed, the Oracle and SQL Server database target members are automatically added to their respective groups.

Next we create a couple of roles – MSSQL_DBA_ROLE and ORA_DBA_ROLE.

From the Setup >> Security >> Roles menu click on Create button.

Create a role called MSSQL_DBA_ROLE

Click on Next and in the Target Privileges section click on Add
In the Target Type select Group and select the MSSQL_DBA_GROUP

Click on the pencil icon in the Manage Target Privilege Grants and change that from View to Full

Click on Review and then Finish

Similarly create another role called ORA_DBA_ROLE and ensure that we select the ORA_DBA_GROUP this time.

Grant the roles we have created to the administrators based on the type of databases they support and wish to view as targets in Cloud Control.

From the Setup >> Security >> Administrators menu select the administrator account and click on Edit.

Click Next and in the Roles screen select the appropriate role which we had created earlier.

If we connect as this admin user, we only see the Oracle database targets displayed.

But if we connect as the SYSMAN user, we can see all the targets, both Oracle and SQL Server.

How to download the 12c Cloud Control Agent software


Note that in Oracle 12c Cloud Control we cannot download the agent software from the Oracle software download web site; we have to provision it via the 12c OMS framework.

In this example we will download the 12c agent software for Windows 32-bit – our OMS is hosted on a Linux platform.

From the Setup >> Extensibility menu select Self Update

We see that we are not connected online to MOS and are using the offline method for the self update.

Note the software library in OEM has not been refreshed recently.


Click on Check Updates


On the OMS server, execute the following commands:


[oracle@vmnapp01 bin]$ ./emcli login -username="sysman" -password="xxx"
Login successful

[oracle@vmnapp01 bin]$ ./emcli import_update_catalog -file=/app/oracle/Middleware/oms/p9348486_112000_Generic.zip -omslocal

Processing catalog for Middleware Profiles and Gold Images
Processing update: Middleware Profiles and Gold Images - Three Fusion Middleware Provisioning Profiles with different heap size configuration
Processing catalog for Agent Software
Processing update: Agent Software - Agent Software (12.1.0.4.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.3.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.2.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.1.0) for Microsoft Windows (32-bit)
Processing update: Agent Software - Agent Software (12.1.0.4.0) for HP-UX PA-RISC (64-bit)

….
……

Successfully uploaded the Self Update catalog to Enterprise Manager. Use the Self Update Console to view and manage updates.

We now see that the refresh has happened successfully and the refresh time has been updated


Select the Microsoft Windows (32-bit) row and click on Download


Enter URL in web browser and download the file

https://updates.oracle.com/Orion/Services/download/p18797166_112000_Generic.zip?aru=17700668&patch_file=p18797166_112000_Generic.zip

Copy the downloaded file to the OMS host

On OMS server run

[oracle@csmsdc-vmnapp01 bin]$ ./emcli import_update -omslocal -file=/var/tmp/p18797166_112000_Generic.zip
Processing update: Agent Software - Agent Software (12.1.0.4.0) for Microsoft Windows (32-bit)

Successfully uploaded the update to Enterprise Manager. Use the Self Update Console to manage this update.

We see that the status has changed from Available to Downloaded


Highlight the Microsoft Windows (32-bit) row and click Apply


Note status has now changed to Applied


We now see that Microsoft Windows (32-bit) shows up in the list of platforms for which the 12c agent can be downloaded.

[oracle@vmnapp01 bin]$ ./emcli get_supported_platforms
-----------------------------------------------
Version = 12.1.0.4.0
Platform = Microsoft Windows (32-bit)
-----------------------------------------------
Version = 12.1.0.4.0
Platform = Linux x86-64
-----------------------------------------------
Version = 12.1.0.1.0
Platform = Microsoft Windows (32-bit)
-----------------------------------------------
Platforms list displayed successfully.

Download the agent image from the OMS to a directory on the Linux server. From here we can copy it to the Windows 32-bit machines where we wish to deploy the agent.


[oracle@vmnapp01 bin]$ which zip
/usr/bin/zip
[oracle@vmnapp01 bin]$ export ZIP_LOC=/usr/bin/zip



[oracle@vmnapp01 bin]$ ./emcli get_agentimage -destination=/app/oracle/Middleware/oms/agent_software -platform="Microsoft Windows (32-bit)" -version="12.1.0.4.0"

=== Partition Detail ===
Space free : 206 GB
Space required : 1 GB
Check the logs at /app/oracle/Middleware/gc_inst/em/EMGC_OMS1/sysman/emcli/setup/.emcli/get_agentimage_2014-08-27_15-16-20-PM.log
Downloading /app/oracle/Middleware/oms/agent_software/12.1.0.4.0_AgentCore_912.zip
File saved as /app/oracle/Middleware/oms/agent_software/12.1.0.4.0_AgentCore_912.zip
Downloading /app/oracle/Middleware/oms/agent_software/12.1.0.4.0_PluginsOneoffs_912.zip
File saved as /app/oracle/Middleware/oms/agent_software/12.1.0.4.0_PluginsOneoffs_912.zip
Downloading /app/oracle/Middleware/oms/agent_software/unzip
File saved as /app/oracle/Middleware/oms/agent_software/unzip
Agent Image Download completed successfully.

Oracle 12c In-Memory Database


While data warehouses have their place, most OLTP databases today are still used to run analytical and DSS-type queries and workloads to support the real-time, decision-enabling information requirements of the business. Designing a database to cater for this hybrid mix of OLTP and DSS workloads presents its own challenges to the DBA. For example, the DBA needs to create indexes to support ad hoc or analytical queries, but the presence of those indexes slows down the day-to-day OLTP INSERT/UPDATE/DELETE statements.

OLTP workloads are best served by having data in row format, which is optimised for DML activity: few rows, many columns. A row format allows quick access to all of the columns in a record, since all of the data for a given record is kept together, either in memory in the database buffer cache or on disk storage.

Reporting and decision-making queries require just the opposite data model: few columns, but spanning a vast number of rows. A column format is ideal for analytics, as it allows faster data retrieval when only a few columns are selected but the query accesses a large portion of the data set.

The Database In-Memory feature was introduced in Oracle 12c 12.1.0.2 (June 2014) and provides the ability to easily perform real-time data analysis together with real-time transaction processing, without any application change or redesign. A subset of the data can now be stored in an in-memory column format optimized for analytical processing.

The Oracle Database In-Memory feature provides the best of both worlds by allowing data to be simultaneously populated in both an in-memory row format (the database buffer cache) and a new in-memory column format.

An area in the SGA, called the In-Memory Area, is allocated for the In-Memory column store; here the data is stored in column format instead of the traditional row format.
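Once the In-Memory area has been sized and enabled (shown below), the column store pools and their usage can be inspected via the V$INMEMORY_AREA view, for example:

SQL> select pool, alloc_bytes, used_bytes, populate_status
  2  from v$inmemory_area;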

Let us now look at a worked example of the In-Memory feature in action!

I created a SALES2 table based on the SALES table residing in the sample SH schema. The table was populated with about 11 million rows and there are no indexes on the table.

This is the test query I used:

SELECT CALENDAR_MONTH_DESC, 
       sum(quantity_sold), 
       lag(sum(quantity_sold),1) over (ORDER BY CALENDAR_MONTH_DESC) "Previous Month" 
FROM 
       SH.sales2,
       SH.TIMES
WHERE
       SH.times.TIME_ID = SH.sales2.TIME_ID AND
       SH.times.CALENDAR_YEAR <> '2000'
GROUP BY CALENDAR_MONTH_DESC
ORDER BY CALENDAR_MONTH_DESC ASC;

 

We need to first enable the In-Memory area; note that this portion of memory is taken from the SGA already allocated to the database.

When we start the database we can see a new In-Memory area of memory displayed.

SQL> alter system set inmemory_size=1024m scope=spfile;

System altered.


SQL> startup;
ORACLE instance started.

Total System Global Area 3221225472 bytes
Fixed Size                  2929552 bytes
Variable Size             603982960 bytes
Database Buffers         1526726656 bytes
Redo Buffers               13844480 bytes
In-Memory Area           1073741824 bytes
Database mounted.
Database opened.

We then use the INMEMORY clause to enable the SALES2 and TIMES tables to use the In-Memory column store.

The INMEMORY attribute can be specified on a tablespace, table, (sub)partition, or materialized view.
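For illustration, the same attribute at the other scopes; the tablespace, partition and materialized view names below are hypothetical:

SQL> alter tablespace users default inmemory;

SQL> alter table sh.sales2 modify partition sales_q1 inmemory priority high;

SQL> alter materialized view sh.cal_month_sales_mv inmemory;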

SQL> alter table sh.times inmemory;

Table altered.


SQL> alter table sh.sales2 inmemory;

Table altered.

 

Note: at this stage, since the tables have not yet been accessed, they have not been populated into the IM column store.

SQL> select segment_name, INMEMORY_SIZE , populate_status from v$im_segments;

no rows selected

(Population can also be made eager rather than on first access via a priority clause, for example: ALTER TABLE customers INMEMORY PRIORITY CRITICAL;)

Let us now execute the query. It takes 0.40 seconds to execute:

CALENDAR SUM(QUANTITY_SOLD) Previous Month
-------- ------------------ --------------
2001-10              294444         246948
2001-11              268836         294444
2001-12              273708         268836
...
...

36 rows selected.

Elapsed: 00:00:00.40

Note the Explain Plan

SQL> select * from table(dbms_xplan.display_cursor());


----------------------------------------------------------------------------------------------------------------

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                           |       |       |   770 (100)|          |
|   1 |  TEMP TABLE TRANSFORMATION         |                           |       |       |            |          |
|   2 |   LOAD AS SELECT                   |                           |       |       |            |          |
|   3 |    VECTOR GROUP BY                 |                           |    60 |  1200 |     5  (20)| 00:00:01 |
|   4 |     KEY VECTOR CREATE BUFFERED     | :KV0000                   |       |       |            |          |
|*  5 |      TABLE ACCESS INMEMORY FULL    | TIMES                     |  1461 | 29220 |     4   (0)| 00:00:01 |
|   6 |   WINDOW BUFFER                    |                           |    60 |  2520 |   765  (26)| 00:00:01 |
|   7 |    SORT GROUP BY                   |                           |    60 |  2520 |   765  (26)| 00:00:01 |
|*  8 |     HASH JOIN                      |                           |    60 |  2520 |   764  (26)| 00:00:01 |
|   9 |      VIEW                          | VW_VT_0834CBC3            |    60 |  1320 |   762  (26)| 00:00:01 |
|  10 |       VECTOR GROUP BY              |                           |    60 |   660 |   762  (26)| 00:00:01 |
|  11 |        HASH GROUP BY               |                           |    60 |   660 |   762  (26)| 00:00:01 |
|  12 |         KEY VECTOR USE             | :KV0000                   |       |       |            |          |
|* 13 |          TABLE ACCESS INMEMORY FULL| SALES2                    |    11M|   115M|   758  (25)| 00:00:01 |
|  14 |      TABLE ACCESS FULL             | SYS_TEMP_0FD9D6600_5B3351 |    60 |  1200 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------


   5 - inmemory("TIMES"."CALENDAR_YEAR"<>2000)
       filter("TIMES"."CALENDAR_YEAR"<>2000)
   8 - access("ITEM_5"=INTERNAL_FUNCTION("C0") AND "ITEM_6"="C2")
  13 - inmemory(SYS_OP_KEY_VECTOR_FILTER("SALES2"."TIME_ID",:KV0000))
       filter(SYS_OP_KEY_VECTOR_FILTER("SALES2"."TIME_ID",:KV0000))

Note
-----
   - vector transformation used for this statement

The In-Memory column store has now been populated with the SALES2 and TIMES tables.

SQL> select segment_name, INMEMORY_SIZE , populate_status from v$im_segments;

SEGMENT_NAME     INMEMORY_SIZE POPULATE_STATUS
---------------- ------------- ---------------
TIMES                  1179648 COMPLETED
SALES2                91422720 COMPLETED

 

Disable In-Memory column store

We see that the same query now takes 2.85 seconds

SQL> alter session set inmemory_query=disable;

Session altered.


CALENDAR SUM(QUANTITY_SOLD) Previous Month
-------- ------------------ --------------
2001-10              294444         246948
2001-11              268836         294444
2001-12              273708         268836

…..
…….

36 rows selected.

Elapsed: 00:00:02.85

Note the difference in the Explain Plan with the In-Memory option disabled.

SQL> select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  2qvd8xzntr6fr, child number 1
-------------------------------------
SELECT CALENDAR_MONTH_DESC,        sum(quantity_sold),
lag(sum(quantity_sold),1) over (ORDER BY CALENDAR_MONTH_DESC) "Previous
Month" FROM        SH.sales2,        SH.TIMES WHERE
SH.times.TIME_ID = SH.sales2.TIME_ID AND        SH.times.CALENDAR_YEAR
<> '2000' GROUP BY CALENDAR_MONTH_DESC ORDER BY CALENDAR_MONTH_DESC ASC

Plan hash value: 1281295501

-------------------------------------------------------------------------------
| Id  | Operation            | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |        |       |       | 14659 (100)|          |
|   1 |  WINDOW BUFFER       |        |    60 |  1860 | 14659   (1)| 00:00:01 |
|   2 |   SORT GROUP BY      |        |    60 |  1860 | 14659   (1)| 00:00:01 |
|*  3 |    HASH JOIN         |        |   918K|    27M| 14638   (1)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL| TIMES  |  1461 | 29220 |    18   (0)| 00:00:01 |
|   5 |     TABLE ACCESS FULL| SALES2 |   918K|  9870K| 14617   (1)| 00:00:01 |
-------------------------------------------------------------------------------

Oracle 12c New Feature IDENTITY Columns


In Oracle 12c when we create a table we can populate a column automatically via a system generated sequence by using the GENERATED AS IDENTITY clause in the CREATE TABLE statement.

We can use GENERATED AS IDENTITY with the ALWAYS, BY DEFAULT, or BY DEFAULT ON NULL keywords, and that affects when and how the identity column value is populated.

By default the GENERATED AS IDENTITY clause implicitly includes the ALWAYS keyword i.e GENERATED ALWAYS AS IDENTITY.

When the ALWAYS keyword is specified, it is not possible to explicitly include values for the identity column in INSERT or UPDATE statements.

SQL> create table emp
  2  (emp_id NUMBER GENERATED ALWAYS AS IDENTITY, ename varchar2(10));

Table created.

SQL> desc emp
Name                                                              Null?    Type
----------------------------------------------------------------- -------- --------------------------------------------
EMP_ID                                                            NOT NULL NUMBER
ENAME                                                                      VARCHAR2(10)


SQL> alter table  emp
  2  add constraint pk_emp primary key (emp_id);

Table altered.


We cannot explicitly enter a value for the identity column EMP_ID as that is generated automatically.

SQL> insert into emp
  2  values
  3  (1,'Bob');
insert into emp
*
ERROR at line 1:
ORA-32795: cannot insert into a generated always identity column


SQL> insert into emp (ename)
  2  values
  3  ('Bob');

1 row created.

SQL> select * from emp;

    EMP_ID ENAME
---------- ----------
         1 Bob


Let us look at another example, this time using the BY DEFAULT keyword.


SQL> drop table emp;

Table dropped.

SQL> create table emp
  2  (emp_id NUMBER GENERATED BY DEFAULT AS IDENTITY, ename varchar2(10));

Table created.


Unlike the previous case, we can specify a value for the identity column. The identity column is only populated automatically if we do not provide a value for it.

SQL> insert into emp
  2  values
  3  (1,'Bob');

1 row created.

SQL> insert into emp
  2  (ename)
  3  values
  4  ('Tom');

1 row created.


SQL> select * from emp;

    EMP_ID ENAME
---------- ----------
         1 Bob
         2 Tom


SQL>  insert into emp
  2  (ename)
  3  values
  4  ('Fred');

1 row created.


SQL> select * from emp;

    EMP_ID ENAME
---------- ----------
         1 Bob
         2 Tom
         3 Fred


SQL>  insert into emp
  2   values
  3  (4,'Jim');

1 row created.


SQL> insert into emp
  2  (ename)
  3    values
  4   ('Fred');
insert into emp
*
ERROR at line 1:
ORA-00001: unique constraint (SH.PK_EMP) violated

Why? The identity sequence knows nothing about explicitly supplied values. We inserted EMP_ID 4 ourselves, so when the sequence next generated the value 4 for this insert, it collided with the existing primary key value.

SQL> insert into emp
  2  (ename)
  3  values
  4  ('Tony');

1 row created.

SQL>  select * from emp;

    EMP_ID ENAME
---------- ----------
         1 Bob
         2 Tom
         3 Fred
         4 Tony



Try to insert a null value:

SQL>  insert into emp
  2  values
  3  (null,'Jim');
(null,'Jim')
*
ERROR at line 3:
ORA-01400: cannot insert NULL into ("SH"."EMP"."EMP_ID")


The GENERATED BY DEFAULT ON NULL clause ensures the identity column is populated automatically both when no value is supplied for the column and, unlike the previous example, when an explicit NULL is supplied.


SQL> drop table emp;

Table dropped.

SQL> create table emp
  2  (emp_id NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY, ename varchar2(10));

Table created.

SQL> insert into emp
  2  (ename)
  3  values
  4   ('Tom');

1 row created.


SQL>  insert into emp
  2  values
  3  (null,'Bob');

1 row created.

SQL> select * from emp;

    EMP_ID ENAME
---------- ----------
         1 Tom
         2 Bob



The sequence will have the prefix ISEQ$$_ followed by the object ID of the table.


SQL> select sequence_name from user_sequences;

SEQUENCE_NAME
------------------------------------------------------------------------------------------------------------------------
ISEQ$$_93421
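We can confirm that the suffix corresponds to the object ID of the table:

SQL> select object_name, object_id
  2  from user_objects
  3  where object_name = 'EMP' and object_type = 'TABLE';

OBJECT_NAME     OBJECT_ID
------------- -----------
EMP                 93421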


There is a new view, *_TAB_IDENTITY_COLS, and the *_TABLES views have a new column, HAS_IDENTITY.


SQL>  select table_name, column_name, generation_type,identity_options
2  from user_tab_identity_cols where sequence_name='ISEQ$$_93421';

TABLE_NAME   COLUMN_NAME  GENERATION
-----------  -----------  ----------
EMP          EMP_ID       BY DEFAULT

IDENTITY_OPTIONS
------------------------------------------------------------------------------------------------------------------------
START WITH: 1, INCREMENT BY: 1, MAX_VALUE: 9999999999999999999999999999, MIN_VALUE: 1, CYCLE_FLAG: N, CACHE_SIZE: 20, ORDER_FLAG: N


SQL> select has_identity from user_tables where table_name='EMP';

HAS
---
YES

Oracle 12c new feature APPROX_COUNT_DISTINCT and FETCH FIRST ROWS


If we are dealing with large amounts of data and need to count the distinct values of a particular column or group of columns, the new 12.1.0.2 APPROX_COUNT_DISTINCT function can be significantly faster than the traditional COUNT(DISTINCT expr) function, with only a marginal difference from the exact result.

The examples below also show lower optimizer cost as well as reduced temporary tablespace usage.

We will also see how the traditional Top-N query has changed significantly in Oracle 12c with the new FETCH FIRST n ROWS and OFFSET clauses.

Let us run the query in the traditional way to determine the Top 5 products in terms of the number of individual customers.

Note the execution plan, and the fact that the sort operation has used some temporary space.

SQL> conn sh/sh
Connected.


SQL> select * from
  2  (select prod_id, count(distinct cust_id) from sales
  3  group by prod_id  order by 2 desc )
  4   where rownum < 6;


   PROD_ID COUNT(DISTINCTCUST_ID)
---------- ----------------------
        30                   6154
        48                   6010
        40                   5972
        31                   5586
       130                   5428


Execution Plan
----------------------------------------------------------
Plan hash value: 1492996759

---------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name      | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |           |     5 |   130 |       |  2654   (2)| 00:00:01 |       |       |
|*  1 |  COUNT STOPKEY            |           |       |       |       |            |          |       |       |
|   2 |   VIEW                    |           |    72 |  1872 |       |  2654   (2)| 00:00:01 |       |       |
|*  3 |    SORT ORDER BY STOPKEY  |           |    72 |  1224 |       |  2654   (2)| 00:00:01 |       |       |
|   4 |     HASH GROUP BY         |           |    72 |  1224 |       |  2654   (2)| 00:00:01 |       |       |
|   5 |      VIEW                 | VM_NWVW_1 |   359K|  5966K|       |  2637   (2)| 00:00:01 |       |       |
|   6 |       HASH GROUP BY       |           |   359K|  3158K|    17M|  2637   (2)| 00:00:01 |       |       |
|   7 |        PARTITION RANGE ALL|           |   918K|  8075K|       |   514   (1)| 00:00:01 |     1 |    28 |
|   8 |         TABLE ACCESS FULL | SALES     |   918K|  8075K|       |   514   (1)| 00:00:01 |     1 |    28 |
---------------------------------------------------------------------------------------------------------------

Now we run the same query using the new 12c features - APPROX_COUNT_DISTINCT and FETCH FIRST n ROWS.

Note the difference in the execution plan and optimizer cost. We see two new operations - WINDOW SORT PUSHED RANK and HASH GROUP BY APPROX.

The results are roughly 98% accurate with the APPROX_COUNT_DISTINCT function in use.


SQL> select prod_id, APPROX_COUNT_DISTINCT (cust_id)  from sales
  2  group by prod_id  order by 2 desc
  3  FETCH FIRST 5 ROWS ONLY;

   PROD_ID APPROX_COUNT_DISTINCT(CUST_ID)
---------- ------------------------------
        30                           6134
        48                           5883
        40                           5858
        31                           5463
       130                           5336


Execution Plan
----------------------------------------------------------
Plan hash value: 723256909

---------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |       |     5 |   260 |   559   (9)| 00:00:01 |       |       |
|   1 |  SORT ORDER BY            |       |     5 |   260 |   559   (9)| 00:00:01 |       |       |
|*  2 |   VIEW                    |       |     5 |   260 |   558   (9)| 00:00:01 |       |       |
|*  3 |    WINDOW SORT PUSHED RANK|       |    72 |   648 |   558   (9)| 00:00:01 |       |       |
|   4 |     HASH GROUP BY APPROX  |       |    72 |   648 |   558   (9)| 00:00:01 |       |       |
|   5 |      PARTITION RANGE ALL  |       |   918K|  8075K|   514   (1)| 00:00:01 |     1 |    28 |
|   6 |       TABLE ACCESS FULL   | SALES |   918K|  8075K|   514   (1)| 00:00:01 |     1 |    28 |
---------------------------------------------------------------------------------------------------


Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   3 - filter(ROW_NUMBER() OVER ( ORDER BY APPROX_COUNT_DISTINCT("CUST_ID") DESC )<=5)

We can also use the OFFSET clause for pagination and obtain the next 5 rows (after the top 5).

SQL> select prod_id, APPROX_COUNT_DISTINCT (cust_id)  from sales
  2   group by prod_id  order by 2 desc
  3  OFFSET 5 ROWS FETCH NEXT 5 ROWS ONLY;

   PROD_ID APPROX_COUNT_DISTINCT(CUST_ID)
---------- ------------------------------
        33                           5278
       120                           5135
       128                           5135
       133                           5114
        23                           5104

We can also use the PERCENT ROWS clause to return the top percentage of rows we desire

SQL>  select prod_id, count(distinct cust_id) from sales
  2   group by prod_id  order by 2 desc
  3  FETCH FIRST 10 PERCENT ROWS ONLY;

   PROD_ID COUNT(DISTINCTCUST_ID)
---------- ----------------------
        30                   6154
        48                   6010
        40                   5972
        31                   5586
       130                   5428
        33                   5389
       120                   5224
       133                   5201

8 rows selected.

When we ran all these queries, the runtime statistics were almost the same in every case:

Statistics
----------------------------------------------------------
          7  recursive calls
          0  db block gets
       1623  consistent gets
       1607  physical reads
          0  redo size
        732  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed
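
The row-limiting clause also supports WITH TIES, which returns any additional rows that tie with the last row on the ORDER BY key. A minimal sketch, not executed as part of this test:

SQL> select prod_id, count(distinct cust_id) from sales
  2  group by prod_id  order by 2 desc
  3  FETCH FIRST 5 ROWS WITH TIES;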

Since the database is 12.1.0.2 and we have enabled the In-Memory option in the database, let us see what happens to the same query when we use the In-Memory column store.

Note the difference in the optimizer cost and execution plan, and the 0 physical reads: the In-Memory option in action, requiring no change to the application or the query!

SQL> alter table sales inmemory;

Table altered.
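
With the default PRIORITY NONE, population of the In-Memory column store happens on the first full scan of the segment. A check we can run against V$IM_SEGMENTS (our addition, not part of the original session):

SQL> select segment_name, populate_status, bytes_not_populated
  2  from v$im_segments
  3  where segment_name = 'SALES';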

SQL> select prod_id, APPROX_COUNT_DISTINCT (cust_id)  from sales
  2   group by prod_id  order by 2 desc
  3  FETCH FIRST 5 ROWS ONLY;

   PROD_ID APPROX_COUNT_DISTINCT(CUST_ID)
---------- ------------------------------
        30                           6134
        48                           5883
        40                           5858
        31                           5463
       130                           5336



Execution Plan
----------------------------------------------------------
Plan hash value: 723256909

---------------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name  | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |       |     5 |   260 |   154  (33)| 00:00:01 |       |       |
|   1 |  SORT ORDER BY                  |       |     5 |   260 |   154  (33)| 00:00:01 |       |       |
|*  2 |   VIEW                          |       |     5 |   260 |   153  (33)| 00:00:01 |       |       |
|*  3 |    WINDOW SORT PUSHED RANK      |       |    72 |   648 |   153  (33)| 00:00:01 |       |       |
|   4 |     HASH GROUP BY APPROX        |       |    72 |   648 |   153  (33)| 00:00:01 |       |       |
|   5 |      PARTITION RANGE ALL        |       |   918K|  8075K|   110   (6)| 00:00:01 |     1 |    28 |
|   6 |       TABLE ACCESS INMEMORY FULL| SALES |   918K|  8075K|   110   (6)| 00:00:01 |     1 |    28 |
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("from$_subquery$_002"."rowlimit_$$_rownumber"<=5)
   3 - filter(ROW_NUMBER() OVER ( ORDER BY APPROX_COUNT_DISTINCT("CUST_ID") DESC )<=5)


Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
         32  consistent gets
          0  physical reads
          0  redo size
        732  bytes sent via SQL*Net to client
        552  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
          2  sorts (memory)
          0  sorts (disk)
          5  rows processed



Minimal downtime database upgrade from Oracle 10g to Oracle 12c


This note describes the procedure of migrating as well as upgrading an Oracle 10g Release 2 database to Oracle 12c (12.1.0.2) using a combination of RMAN Incremental backups and the new Oracle 12c command line parallel upgrade utility.

This method minimises the outage required for the migration as well as the database upgrade: the outage is limited to the time it takes to back up the last set of archive log files generated since the last Level 1 incremental backup, plus the time taken to run the catupgrd.sql upgrade script (which can now be run from the command line in parallel).

In this example we are moving a database from a Linux 5.3 host to a Linux 5.7 host, moving from ASM to non-ASM, as well as upgrading the database.

Let us take a look at the steps involved.

Copy the new 12c Pre-Upgrade scripts to the Oracle 10g database server and execute preupgrd.sql in the Oracle 10.2.0.5 database

[oracle@cls18 ~]$ mkdir preupgrade
[oracle@cls18 ~]$ cd preupgrade/
[oracle@cls18 preupgrade]$ scp -rp oracle@198.168.82.211:/app/oracle/product/12.1.0.2/dbhome_1/preupgrade/* .
oracle@198.168.82.211's password:
emremove.sql	100%   19KB  19.2KB/s   00:00
preupgrd.sql 	100%   14KB  13.8KB/s   00:00
utluppkg.sql	100%  484KB 483.9KB/s   00:00


[oracle@cls18 preupgrade]$ sqlplus sys as sysdba

SQL*Plus: Release 10.2.0.5.0 - Production on Mon Dec 1 09:25:44 2014

Copyright (c) 1982, 2010, Oracle.  All Rights Reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> @preupgrd.sql

Loading Pre-Upgrade Package...


***************************************************************************
Executing Pre-Upgrade Checks in DEV18...
***************************************************************************


      ************************************************************

                  ====>> ERRORS FOUND for DEV18 <<====

 The following are *** ERROR LEVEL CONDITIONS *** that must be addressed
                    prior to attempting your upgrade.
            Failure to do so will result in a failed upgrade.


 1) Check Tag:    COMPATIBLE_PARAMETER
    Check Summary: Verify compatible parameter value is valid
    Fixup Summary:
     ""compatible" parameter must be increased manually prior to upgrade."
    +++ Source Database Manual Action Required +++


 2) Check Tag:    PURGE_RECYCLEBIN
    Check Summary: Check that recycle bin is empty prior to upgrade
    Fixup Summary:
     "The recycle bin will be purged."

           You MUST resolve the above errors prior to upgrade

      ************************************************************

      ************************************************************

              ====>> PRE-UPGRADE RESULTS for DEV18 <<====

ACTIONS REQUIRED:

1. Review results of the pre-upgrade checks:
 /app/DEV18/product/product/10.2.0/db_1/cfgtoollogs/DEV18/preupgrade/preupgrade.log

2. Execute in the SOURCE environment BEFORE upgrade:
 /app/DEV18/product/product/10.2.0/db_1/cfgtoollogs/DEV18/preupgrade/preupgrade_fixups.sql

3. Execute in the NEW environment AFTER upgrade:
 /app/DEV18/product/product/10.2.0/db_1/cfgtoollogs/DEV18/preupgrade/postupgrade_fixups.sql

      ************************************************************

***************************************************************************
Pre-Upgrade Checks in DEV18 Completed.
***************************************************************************

***************************************************************************
***************************************************************************
SQL>

Review the preupgrade.log file


[oracle@cls18 preupgrade]$ cat /app/DEV18/product/product/10.2.0/db_1/cfgtoollogs/DEV18/preupgrade/preupgrade.log
Oracle Database Pre-Upgrade Information Tool 12-01-2014 09:26:09
Script Version: 12.1.0.2.0 Build: 006
**********************************************************************
   Database Name:  DEV18
  Container Name:  Not Applicable in Pre-12.1 database
    Container ID:  Not Applicable in Pre-12.1 database
         Version:  10.2.0.5.0
      Compatible:  10.2.0.3.0
       Blocksize:  8192
        Platform:  Linux x86 64-bit
   Timezone file:  V14
**********************************************************************
                           [Update parameters]

--> If Target Oracle is 64-bit, refer here for Update Parameters:
WARNING: --> "sga_target" needs to be increased to at least 650117120
**********************************************************************
**********************************************************************
                          [Renamed Parameters]
                     [No Renamed Parameters in use]
**********************************************************************
**********************************************************************
                    [Obsolete/Deprecated Parameters]
--> background_dump_dest         11.1       DESUPPORTED  replaced by  "diagnostic_dest"
--> user_dump_dest               11.1       DESUPPORTED  replaced by  "diagnostic_dest"

        [Changes required in Oracle Database init.ora or spfile]

**********************************************************************
                            [Component List]
**********************************************************************
--> Oracle Catalog Views                   [upgrade]  VALID
--> Oracle Packages and Types              [upgrade]  VALID
--> JServer JAVA Virtual Machine           [upgrade]  VALID
--> Oracle XDK for Java                    [upgrade]  VALID
--> Real Application Clusters              [upgrade]  VALID
--> Oracle Workspace Manager               [upgrade]  VALID
--> OLAP Analytic Workspace                [upgrade]  VALID
--> Oracle Enterprise Manager Repository   [upgrade]  VALID
--> Oracle Text                            [upgrade]  VALID
--> Oracle XML Database                    [upgrade]  VALID
--> Oracle Java Packages                   [upgrade]  VALID
--> Oracle Multimedia                      [upgrade]  VALID
--> Oracle Spatial                         [upgrade]  VALID
--> Data Mining                            [upgrade]  VALID
--> Expression Filter                      [upgrade]  VALID
--> Rule Manager                           [upgrade]  VALID
--> Oracle OLAP API                        [upgrade]  VALID
**********************************************************************


                              [Tablespaces]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
     minimum required size: 1259 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
     minimum required size: 400 MB
--> SYSAUX tablespace is adequate for the upgrade.
     minimum required size: 1552 MB
--> TEMP tablespace is adequate for the upgrade.
     minimum required size: 60 MB

                      [No adjustments recommended]

**********************************************************************
**********************************************************************
                          [Pre-Upgrade Checks]
**********************************************************************
ERROR: --> Compatible set too low

     "compatible" currently set at 10.2.0.3.0 and must
     be set to at least 11.0.0 prior to upgrading the database.
     Do not make this change until you are ready to upgrade
     because a downgrade back to 10.2 is not possible once compatible
     has been raised.

     Update your init.ora or spfile to make this change.

WARNING: --> "ORACLE_OCM" user found in database

     This is an internal account used by Oracle Configuration Manager.
     Please drop this user prior to upgrading.

WARNING: --> Enterprise Manager Database Control repository found in the database

     In Oracle Database 12c, Database Control is removed during
     the upgrade. To save time during the Upgrade, this action
     can be done prior to upgrading using the following steps after
     copying rdbms/admin/emremove.sql from the new Oracle home
   - Stop EM Database Control:
    $> emctl stop dbconsole

   - Connect to the Database using the SYS account AS SYSDBA:

   SET ECHO ON;
   SET SERVEROUTPUT ON;
   @emremove.sql
     Without the set echo and serveroutput commands you will not
     be able to follow the progress of the script.

WARNING: --> "DMSYS" schema exists in the database

     The DMSYS schema (Oracle Data Mining) will be removed
     from the database during the database upgrade.
     All data in DMSYS will be preserved under the SYS schema.
     Refer to the Oracle Data Mining User's Guide for details.

WARNING: --> Database contains INVALID objects prior to upgrade

     The list of invalid SYS/SYSTEM objects was written to
     registry$sys_inv_objs.
     The list of non-SYS/SYSTEM objects was written to
     registry$nonsys_inv_objs unless there were over 5000.
     Use utluiobj.sql after the upgrade to identify any new invalid
     objects due to the upgrade.

INFORMATION: --> OLAP Catalog(AMD) exists in database

     Starting with Oracle Database 12c, OLAP Catalog component is desupported.
     If you are not using the OLAP Catalog component and want
     to remove it, then execute the
     ORACLE_HOME/olap/admin/catnoamd.sql script before or
     after the upgrade.

INFORMATION: --> Older Timezone in use

     Database is using a time zone file older than version 18.
     After the upgrade, it is recommended that DBMS_DST package
     be used to upgrade the 10.2.0.5.0 database time zone version
     to the latest version which comes with the new release.
     Please refer to My Oracle Support note number 977512.1 for details.

ERROR: --> RECYCLE_BIN not empty.
     Your recycle bin contains 56 object(s).
     It is REQUIRED that the recycle bin is empty prior to upgrading.
     Immediately before performing the upgrade, execute the following
     command:
       EXECUTE dbms_preup.purge_recyclebin_fixup;

WARNING: --> Existing schemas with network ACLs exist

     Database contains schemas with objects dependent on network packages.
     Refer to the Upgrade Guide for instructions to configure Network ACLs.
     USER UTILITY has dependent objects.
     USER DBAMON has dependent objects.
     USER ORACLE_OCM has dependent objects.

INFORMATION: --> There are existing Oracle components that will NOT be
     upgraded by the database upgrade script.  Typically, such components
     have their own upgrade scripts, are deprecated, or obsolete.
     Those components are:  OLAP Catalog


**********************************************************************
                      [Pre-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ********* Dictionary Statistics *********
                        *****************************************

Please gather dictionary statistics 24 hours prior to
upgrading the database.
To gather dictionary statistics execute the following command
while connected as SYSDBA:
    EXECUTE dbms_stats.gather_dictionary_stats;

^^^ MANUAL ACTION SUGGESTED ^^^


**********************************************************************
                     [Post-Upgrade Recommendations]
**********************************************************************

                        *****************************************
                        ******** Fixed Object Statistics ********
                        *****************************************

Please create stats on fixed objects two weeks
after the upgrade using the command:
   EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

^^^ MANUAL ACTION SUGGESTED ^^^

**********************************************************************
                   ************  Summary  ************

 2 ERRORS exist that must be addressed prior to performing your upgrade.
 5 WARNINGS that Oracle suggests are addressed to improve database performance.
 3 INFORMATIONAL messages that should be reviewed prior to your upgrade.

 After your database is upgraded and open in normal mode you must run
 rdbms/admin/catuppst.sql which executes several required tasks and completes
 the upgrade process.

 You should follow that with the execution of rdbms/admin/utlrp.sql, and a
 comparison of invalid objects before and after the upgrade using
 rdbms/admin/utluiobj.sql

 If needed you may want to upgrade your timezone data using the process
 described in My Oracle Support note 1509653.1
                   ***********************************

In the Oracle 10g database execute the Pre-Upgrade fixup scripts

SQL>   EXECUTE dbms_preup.purge_recyclebin_fixup;

PL/SQL procedure successfully completed.

SQL> EXECUTE dbms_stats.gather_dictionary_stats;

PL/SQL procedure successfully completed.


SQL> SET ECHO ON;
SQL> SET SERVEROUTPUT ON;
SQL> @emremove.sql

.....

Dropping synonym : MGMT$TARGET_PROPERTIES ...
Dropping synonym : MGMT$TARGET_TYPE ...
Finished phase 5
Starting phase 6 : Dropping Oracle Enterprise Manager related other roles ...
Finished phase 6
The Oracle Enterprise Manager related schemas and objects are dropped.
Do the manual steps to studown the DB Control if not done before running this
script and then delete the DB Control configuration files

PL/SQL procedure successfully completed.

Take a level 0 RMAN Incremental backup

RMAN> run {
2> allocate channel c1 type disk;
3> backup incremental level 0 database format '/app/DEV18/oradump/bkp_lev0.%U';
4> release channel c1;
5> }

released channel: ORA_SBT_TAPE_1
released channel: ORA_SBT_TAPE_2
released channel: ORA_SBT_TAPE_3
released channel: ORA_SBT_TAPE_4
released channel: ORA_SBT_TAPE_5
allocated channel: c1
channel c1: sid=266 instance=DEV181 devtype=DISK

Starting backup at 01-DEC-14
channel c1: starting incremental level 0 datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00010 name=+DEV18_DATA_01/DEV18/datafile/utility_data.266.798291365
input datafile fno=00003 name=+DEV18_DATA_01/DEV18/datafile/sysaux.257.747180651
...
...

input datafile fno=00015 name=+DEV18_DATA_01/DEV18/datafile/rpo_data.276.798291365
input datafile fno=00020 name=+DEV18_DATA_01/DEV18/datafile/dw_data.282.821809927
channel c1: starting piece 1 at 01-DEC-14
channel c1: finished piece 1 at 01-DEC-14
piece handle=/app/DEV18/oradump/bkp_lev0.8ppp2gus_1_1 tag=TAG20141201T094147 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:25
Finished backup at 01-DEC-14

Starting Control File and SPFILE Autobackup at 01-DEC-14
piece handle=/app/DEV18/product/product/10.2.0/db_1/dbs/c-1508569629-20141201-00 comment=NONE
Finished Control File and SPFILE Autobackup at 01-DEC-14

released channel: c1

Copy the password file and init.ora from 10g source to Oracle 12c target database environment

[oracle@cls18 dbs]$ scp -rp orapwDEV181 oracle@198.168.82.211:/app/oracle/product/12.1.0/dbhome_1/dbs
oracle@198.168.82.211's password:
orapwDEV181  100% 1536     1.5KB/s   00:00

 [oracle@cls18 dbs]$ scp -rp initDEV181.ora oracle@198.168.82.211:/app/oracle/product/12.1.0/dbhome_1/dbs
oracle@198.168.82.211's password:
initDEV181.ora            

Edit the init.ora parameter file in the 12c database environment and make changes such as the following (a sketch of the edited entries appears after this list):

remove desupported 10g parameters such as background_dump_dest and user_dump_dest

change compatible

add diagnostic_dest

change the control file locations, since in this example we are moving from an ASM source on 10g to a non-ASM target on 12c

add db_file_name_convert and log_file_name_convert
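
A sketch of the edited entries using the paths from this example (the values are illustrative and must be adjusted to your environment; compatible must be at least 11.0.0 before the upgrade, and the redo source path in log_file_name_convert is our assumption):

*.compatible='11.2.0'
*.diagnostic_dest='/app/oracle'
*.control_files='/app/oracle/oradata/DEV181/control01.ctl','/app/oracle/oradata/DEV181/control02.ctl','/app/oracle/oradata/DEV181/control03.ctl'
*.db_file_name_convert='+DEV18_DATA_01/DEV18/datafile','/app/oracle/oradata/DEV181'
*.log_file_name_convert='+DEV18_DATA_01/DEV18','/app/oracle/oradata/DEV181'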

In the Oracle 12c environment start the instance in NOMOUNT state

[oracle@vosap02 dbs]$ export ORACLE_SID=DEV181
[oracle@vosap02 dbs]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Sun Nov 30 23:08:13 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:
Connected to an idle instance.

SQL> startup nomount;
ORACLE instance started.

Total System Global Area  880803840 bytes
Fixed Size                  2930416 bytes
Variable Size             469764368 bytes
Database Buffers          402653184 bytes
Redo Buffers                5455872 bytes

Restore control file backup


[oracle@vosap02 backup]$ pwd
/app/oracle/oradata/DEV181/backup
[oracle@vosap02 backup]$ ls
bkp_lev0.8ppp2gus_1_1  c-1508569629-20141201-00


[oracle@vosap02 backup]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Nov 30 23:14:52 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DEV18 (not mounted)

RMAN> restore controlfile from '/app/oracle/oradata/DEV181/backup/c-1508569629-20141201-00';

Starting restore at 30-NOV-14
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=12 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/app/oracle/oradata/DEV181/control01.ctl
output file name=/app/oracle/oradata/DEV181/control02.ctl
output file name=/app/oracle/oradata/DEV181/control03.ctl
Finished restore at 30-NOV-14


RMAN> alter database mount;

Statement processed
released channel: ORA_DISK_1




RMAN> catalog start with '/app/oracle/oradata/DEV181/backup';

searching for all files that match the pattern /app/oracle/oradata/DEV181/backup

List of Files Unknown to the Database
=====================================
File Name: /app/oracle/oradata/DEV181/backup/bkp_lev0.8ppp2gus_1_1
File Name: /app/oracle/oradata/DEV181/backup/c-1508569629-20141201-00

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /app/oracle/oradata/DEV181/backup/bkp_lev0.8ppp2gus_1_1
File Name: /app/oracle/oradata/DEV181/backup/c-1508569629-20141201-00

Restore Level 0 backup

In this example we are changing the data file names at the database level via the SET NEWNAME FOR DATABASE command.

We can also change data file names individually using the SET NEWNAME FOR DATAFILE command, as sketched below.
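
For example, inside the run block we could rename just the SYSTEM data file (file# 1) with something like the following (illustrative, not executed here):

SET NEWNAME FOR DATAFILE 1 TO '/app/oracle/oradata/DEV181/system01.dbf';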

RMAN>  run {
2> allocate channel c1 type disk;
3> allocate channel c2 type disk;
4> SET NEWNAME FOR DATABASE   TO  '/app/oracle/oradata/DEV181/%b';
5> SET NEWNAME FOR tempfile  1 TO  '/app/oracle/oradata/DEV181/%b';
6> restore database;
7> switch datafile all;
8> switch tempfile all;
9> release channel c1;
10> release channel c2;
11> }

allocated channel: c1
channel c1: SID=12 device type=DISK

allocated channel: c2
channel c2: SID=249 device type=DISK

executing command: SET NEWNAME

executing command: SET NEWNAME

Starting restore at 30-NOV-14

channel c1: starting datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
channel c1: restoring datafile 00001 to /app/oracle/oradata/DEV181/system.256.747180649
channel c1: restoring datafile 00002 to /app/oracle/oradata/DEV181/undotbs1.258.747180651
channel c1: restoring datafile 00003 to /app/oracle/oradata/DEV181/sysaux.257.747180651
...
...

channel c1: reading from backup piece /app/DEV18/oradump/bkp_lev0.8ppp2gus_1_1
channel c1: errors found reading piece handle=/app/DEV18/oradump/bkp_lev0.8ppp2gus_1_1
channel c1: failover to piece handle=/app/oracle/oradata/DEV181/backup/bkp_lev0.8ppp2gus_1_1 tag=TAG20141201T094147
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:15
Finished restore at 30-NOV-14

datafile 1 switched to datafile copy
input datafile copy RECID=18 STAMP=865034590 file name=/app/oracle/oradata/DEV181/system.256.747180649
datafile 2 switched to datafile copy
input datafile copy RECID=19 STAMP=865034590 file name=/app/oracle/oradata/DEV181/undotbs1.258.747180651
datafile 3 switched to datafile copy
input datafile copy RECID=20 STAMP=865034591 file name=/app/oracle/oradata/DEV181/sysaux.257.747180651
datafile 4 switched to datafile copy
input datafile copy RECID=21 STAMP=865034591 file name=/app/oracle/oradata/DEV181/users.259.747180651
...
...
...
input datafile copy RECID=33 STAMP=865034591 file name=/app/oracle/oradata/DEV181/dw_data.282.821809927
datafile 21 switched to datafile copy
input datafile copy RECID=34 STAMP=865034591 file name=/app/oracle/oradata/DEV181/gg_data.283.838134609

renamed tempfile 1 to /app/oracle/oradata/DEV181/temp.264.747180711 in control file
renamed tempfile 2 to /app/oracle/oradata/DEV181/temp_others.281.798291417 in control file

released channel: c1

released channel: c2

Take a level 1 RMAN Incremental backup

At this stage there is still no application outage; to simulate ongoing activity we make some changes in the source 10g database


SQL> create table system.myobjects_1
  2  tablespace users as select * from dba_objects;

Table created.

[oracle@cls18 oradump]$ rman target /

Recovery Manager: Release 10.2.0.5.0 - Production on Mon Dec 1 12:29:49 2014

Copyright (c) 1982, 2007, Oracle. All rights reserved.

connected to target database: DEV18 (DBID=1508569629)

RMAN> run {
allocate channel c1 type disk;
backup incremental level 1 database format '/app/DEV18/oradump/bkp_lev1.%U';
release channel c1;
}
2> 3> 4> 5>
using target database control file instead of recovery catalog
allocated channel: c1
channel c1: sid=238 instance=DEV181 devtype=DISK

Starting backup at 01-DEC-14
channel c1: starting incremental level 1 datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00010 name=+DEV18_DATA_01/DEV18/datafile/utility_data.266.798291365
input datafile fno=00003 name=+DEV18_DATA_01/DEV18/datafile/sysaux.257.747180651

input datafile fno=00020 name=+DEV18_DATA_01/DEV18/datafile/dw_data.282.821809927
channel c1: starting piece 1 at 01-DEC-14
channel c1: finished piece 1 at 01-DEC-14
piece handle=/app/DEV18/oradump/bkp_lev1.8rpp2qri_1_1 tag=TAG20141201T123040 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:07
Finished backup at 01-DEC-14

Starting Control File and SPFILE Autobackup at 01-DEC-14
piece handle=/app/DEV18/product/product/10.2.0/db_1/dbs/c-1508569629-20141201-01 comment=NONE
Finished Control File and SPFILE Autobackup at 01-DEC-14

released channel: c1

Copy the level 1 incremental backup to target and register

[oracle@cls18 oradump]$ scp -rp bkp_lev1.8rpp2qri_1_1 oracle@198.168.82.211:/app/oracle/oradata/DEV181/backup
oracle@198.168.82.211's password:
bkp_lev1.8rpp2qri_1_1   100%  108MB  18.0MB/s   00:06



RMAN> catalog start with '/app/oracle/oradata/DEV181/backup';

searching for all files that match the pattern /app/oracle/oradata/DEV181/backup

List of Files Unknown to the Database
=====================================
File Name: /app/oracle/oradata/DEV181/backup/bkp_lev1.8rpp2qri_1_1

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /app/oracle/oradata/DEV181/backup/bkp_lev1.8rpp2qri_1_1

Recover the database – note it will fail when it tries to apply a non-existent archive log file

RMAN> run {
2> allocate channel c1 type disk;
3> recover database;
4> }

allocated channel: c1
channel c1: SID=12 device type=DISK

Starting recover at 30-NOV-14
channel c1: starting incremental datafile backup set restore
channel c1: specifying datafile(s) to restore from backup set
destination for restore of datafile 00001: /app/oracle/oradata/DEV181/system.256.747180649
destination for restore of datafile 00002: /app/oracle/oradata/DEV181/undotbs1.258.747180651
destination for restore of datafile 00003: /app/oracle/oradata/DEV181/sysaux.257.747180651
...
...
...
destination for restore of datafile 00021: /app/oracle/oradata/DEV181/gg_data.283.838134609
channel c1: reading from backup piece /app/oracle/oradata/DEV181/backup/bkp_lev1.8rpp2qri_1_1
channel c1: piece handle=/app/oracle/oradata/DEV181/backup/bkp_lev1.8rpp2qri_1_1 tag=TAG20141201T123040
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:03

starting media recovery

unable to find archived log
archived log thread=1 sequence=53490
released channel: c1
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 11/30/2014 23:36:09
RMAN-06054: media recovery requesting unknown archived log for thread 1 with sequence 53490 and starting SCN of 5982863583825

RMAN>

Let us make some more changes in the database, then perform a final log switch to ensure all changes are written from the log buffer to the archived log files

OUTAGE STARTS NOW!

SQL> create table system.myobjects_2
  2  tablespace users as select * from dba_objects;

Table created.

SQL> alter system switch logfile;

System altered

Take a backup of the archive log files generated since the last Level 1 incremental backup using the command

BACKUP ARCHIVELOG ALL NOT BACKED UP 1 TIMES
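
On the source 10g database the backup would be run along these lines (the format string is our own, chosen to match the backup piece name that appears below):

RMAN> run {
2> allocate channel c1 type disk;
3> backup archivelog all not backed up 1 times format '/app/DEV18/oradump/bkp_arch.%U';
4> release channel c1;
5> }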

Copy the archivelog backup to target and register


[oracle@cls18 oradump]$ scp -rp bkp_arch.8tpp2rb6_1_1 oracle@198.168.82.211:/app/oracle/oradata/DEV181/backup
oracle@198.168.82.211's password:
bkp_arch.8tpp2rb6_1_1   100% 2101MB  19.3MB/s   01:49


RMAN> catalog start with '/app/oracle/oradata/DEV181/backup';

searching for all files that match the pattern /app/oracle/oradata/DEV181/backup

List of Files Unknown to the Database
=====================================
File Name: /app/oracle/oradata/DEV181/backup/bkp_arch.8tpp2rb6_1_1

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /app/oracle/oradata/DEV181/backup/bkp_arch.8tpp2rb6_1_1

Run a LIST BACKUP OF ARCHIVELOG command and note the last archive log file which has been backed up


RMAN> list backup of archivelog all;

...
...
...

  1    53495   5982863638922 01-DEC-14 5982863650457 01-DEC-14
  1    53496   5982863650457 01-DEC-14 5982863662947 01-DEC-14
  1    53497   5982863662947 01-DEC-14 5982863673387 01-DEC-14
  1    53498   5982863673387 01-DEC-14 5982863674424 01-DEC-14
  1    53499   5982863674424 01-DEC-14 5982863674684 01-DEC-14

Recover the database until the sequence number noted above + 1 (53499+1)


[oracle@vosap02 backup]$ rman target /

Recovery Manager: Release 12.1.0.2.0 - Production on Sun Nov 30 23:44:43 2014

Copyright (c) 1982, 2014, Oracle and/or its affiliates.  All rights reserved.

connected to target database: DEV18 (DBID=1508569629, not open)

RMAN> run {
2> allocate channel c1 type disk;
3> set until sequence 53500 thread 1;
4> recover database;
5> }

using target database control file instead of recovery catalog
allocated channel: c1
channel c1: SID=12 device type=DISK

executing command: SET until clause

Starting recover at 30-NOV-14

starting media recovery

channel c1: starting archived log restore to default destination
channel c1: restoring archived log
archived log thread=1 sequence=53490
channel c1: restoring archived log
...
...

channel c1: restoring archived log
archived log thread=1 sequence=53498
channel c1: restoring archived log
archived log thread=1 sequence=53499
channel c1: reading from backup piece /app/oracle/oradata/DEV181/backup/bkp_arch.8tpp2rb6_1_1
channel c1: piece handle=/app/oracle/oradata/DEV181/backup/bkp_arch.8tpp2rb6_1_1 tag=TAG20141201T123859
channel c1: restored backup piece 1
channel c1: restore complete, elapsed time: 00:00:03
archived log file name=/app/oracle/oradata/DEV181/arch/1_53490_747180704.log thread=1 sequence=53490
archived log file name=/app/oracle/oradata/DEV181/arch/1_53491_747180704.log thread=1 sequence=53491
archived log file name=/app/oracle/oradata/DEV181/arch/1_53492_747180704.log thread=1 sequence=53492
archived log file name=/app/oracle/oradata/DEV181/arch/1_53493_747180704.log thread=1 sequence=53493
archived log file name=/app/oracle/oradata/DEV181/arch/1_53494_747180704.log thread=1 sequence=53494
archived log file name=/app/oracle/oradata/DEV181/arch/1_53495_747180704.log thread=1 sequence=53495
archived log file name=/app/oracle/oradata/DEV181/arch/1_53496_747180704.log thread=1 sequence=53496
archived log file name=/app/oracle/oradata/DEV181/arch/1_53497_747180704.log thread=1 sequence=53497
archived log file name=/app/oracle/oradata/DEV181/arch/1_53498_747180704.log thread=1 sequence=53498
archived log file name=/app/oracle/oradata/DEV181/arch/1_53499_747180704.log thread=1 sequence=53499
media recovery complete, elapsed time: 00:00:02
Finished recover at 30-NOV-14
released channel: c1

Open the database with RESETLOGS UPGRADE

Ignore the error related to block change tracking – we will disable that.


RMAN> alter database open resetlogs upgrade;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of sql statement command at 11/30/2014 23:47:30
ORA-19751: could not create the change tracking file
ORA-19750: change tracking file: '+DEV18_DATA_01/DEV18/changetracking/ctf.278.864472519'
ORA-17502: ksfdcre:1 Failed to create file +DEV18_DATA_01/DEV18/changetracking/ctf.278.864472519
ORA-17501: logical block size 4294967295 is invalid
ORA-29701: unable to connect to Cluster Synchronization Service
ORA-17503: ksfdopn:2 Failed to open file +DEV18_DATA_01/DEV18/changetracking/ctf.278.864472519
ORA-15001: diskgroup "DEV18_DATA_01" does not exist or


SQL> alter database disable block change tracking;

Database altered.

Shut down the database and open it in the Oracle 12c environment in STARTUP UPGRADE mode

[oracle@vosap02 backup]$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Mon Dec 1 02:08:11 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> shutdown immediate;
ORA-01109: database not open


Database dismounted.
ORACLE instance shut down.


SQL> startup upgrade;
ORACLE instance started.

Total System Global Area  880803840 bytes
Fixed Size                  2930416 bytes
Variable Size             469764368 bytes
Database Buffers          402653184 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.

Execute the catctl.pl Perl script, which will run catupgrd.sql in parallel mode


[oracle@vosap02 oracle]$ cd $ORACLE_HOME/rdbms/admin


[oracle@vosap02 admin]$ $ORACLE_HOME/perl/bin/perl catctl.pl -n 4 -l /app/oracle catupgrd.sql

Argument list for [catctl.pl]
SQL Process Count     n = 4
SQL PDB Process Count N = 0
Input Directory       d = 0
Phase Logging Table   t = 0
Log Dir               l = /app/oracle
Script                s = 0
Serial Run            S = 0
Upgrade Mode active   M = 0
Start Phase           p = 0
End Phase             P = 0
Log Id                i = 0
Run in                c = 0
Do not run in         C = 0
Echo OFF              e = 1
No Post Upgrade       x = 0
Reverse Order         r = 0
Open Mode Normal      o = 0
Debug catcon.pm       z = 0
Debug catctl.pl       Z = 0
Display Phases        y = 0
Child Process         I = 0

catctl.pl version: 12.1.0.2.0
Oracle Base           = /app/oracle

Analyzing file catupgrd.sql
Log files in /app/oracle
catcon: ALL catcon-related output will be written to /app/oracle/catupgrd_catcon_26248.lst
catcon: See /app/oracle/catupgrd*.log files for output generated by scripts
catcon: See /app/oracle/catupgrd_*.lst files for spool files, if any
Number of Cpus        = 2
SQL Process Count     = 4

------------------------------------------------------
Phases [0-73]
Serial   Phase #: 0 Files: 1     Time: 138s
Serial   Phase #: 1 Files: 5     Time: 31s
Restart  Phase #: 2 Files: 1     Time: 0s
Parallel Phase #: 3 Files: 18    Time: 9s
Restart  Phase #: 4 Files: 1     Time: 0s
Serial   Phase #: 5 Files: 5     Time: 14s
Serial   Phase #: 6 Files: 1     Time: 10s
Serial   Phase #: 7 Files: 4     Time: 5s
Restart  Phase #: 8 Files: 1     Time: 1s
Parallel Phase #: 9 Files: 62    Time: 26s
Restart  Phase #:10 Files: 1     Time: 0s
Serial   Phase #:11 Files: 1     Time: 10s
Restart  Phase #:12 Files: 1     Time: 0s
Parallel Phase #:13 Files: 91    Time: 9s
Restart  Phase #:14 Files: 1     Time: 0s
Parallel Phase #:15 Files: 111   Time: 14s
Restart  Phase #:16 Files: 1     Time: 0s
Serial   Phase #:17 Files: 3     Time: 0s
Restart  Phase #:18 Files: 1     Time: 0s
Parallel Phase #:19 Files: 32    Time: 19s
Restart  Phase #:20 Files: 1     Time: 0s
Serial   Phase #:21 Files: 3     Time: 5s
Restart  Phase #:22 Files: 1     Time: 0s
Parallel Phase #:23 Files: 23    Time: 69s
Restart  Phase #:24 Files: 1     Time: 0s
Parallel Phase #:25 Files: 11    Time: 39s
Restart  Phase #:26 Files: 1     Time: 0s
Serial   Phase #:27 Files: 1     Time: 0s
Restart  Phase #:28 Files: 1     Time: 0s
Serial   Phase #:30 Files: 1     Time: 0s
Serial   Phase #:31 Files: 257   Time: 15s
Serial   Phase #:32 Files: 1     Time: 0s
Restart  Phase #:33 Files: 1     Time: 0s
Serial   Phase #:34 Files: 1     Time: 3s
Restart  Phase #:35 Files: 1     Time: 0s
Restart  Phase #:36 Files: 1     Time: 0s
Serial   Phase #:37 Files: 4     Time: 44s
Restart  Phase #:38 Files: 1     Time: 0s
Parallel Phase #:39 Files: 13    Time: 50s
Restart  Phase #:40 Files: 1     Time: 0s
Parallel Phase #:41 Files: 10    Time: 6s
Restart  Phase #:42 Files: 1     Time: 0s
Serial   Phase #:43 Files: 1     Time: 6s
Restart  Phase #:44 Files: 1     Time: 0s
Serial   Phase #:45 Files: 1     Time: 39s
Serial   Phase #:46 Files: 1     Time: 0s
Restart  Phase #:47 Files: 1     Time: 0s
Serial   Phase #:48 Files: 1     Time: 99s
Restart  Phase #:49 Files: 1     Time: 0s
Serial   Phase #:50 Files: 1     Time: 113s
Restart  Phase #:51 Files: 1     Time: 0s
Serial   Phase #:52 Files: 1     Time: 21s
Restart  Phase #:53 Files: 1     Time: 0s
Serial   Phase #:54 Files: 1     Time: 162s
Restart  Phase #:55 Files: 1     Time: 0s
Serial   Phase #:56 Files: 1     Time: 48s
Restart  Phase #:57 Files: 1     Time: 0s
Serial   Phase #:58 Files: 1     Time: 154s
Restart  Phase #:59 Files: 1     Time: 0s
Serial   Phase #:60 Files: 1     Time: 242s
Restart  Phase #:61 Files: 1     Time: 0s
Serial   Phase #:62 Files: 1     Time: 36s
Restart  Phase #:63 Files: 1     Time: 0s
Serial   Phase #:64 Files: 1     Time: 2s
Serial   Phase #:65 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose -upgrade_mode_only > /app/oracle/catupgrd_datapatch_upgrade.log 2> /app/oracle/catupgrd_datapatch_upgrade.err
returned from sqlpatch
    Time: 21s
Serial   Phase #:66 Files: 1     Time: 31s
Serial   Phase #:68 Files: 1     Time: 0s
Serial   Phase #:69 Files: 1 Calling sqlpatch with LD_LIBRARY_PATH=/app/oracle/product/12.1.0/dbhome_1/lib; export LD_LIBRARY_PATH;/app/oracle/product/12.1.0/dbhome_1/perl/bin/perl -I /app/oracle/product/12.1.0/dbhome_1/rdbms/admin -I /app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch /app/oracle/product/12.1.0/dbhome_1/rdbms/admin/../../sqlpatch/sqlpatch.pl -verbose > /app/oracle/catupgrd_datapatch_normal.log 2> /app/oracle/catupgrd_datapatch_normal.err
returned from sqlpatch
    Time: 32s
Serial   Phase #:70 Files: 1     Time: 73s
Serial   Phase #:71 Files: 1     Time: 0s
Serial   Phase #:72 Files: 1     Time: 0s
Serial   Phase #:73 Files: 1     Time: 21s

Grand Total Time: 1620s

LOG FILES: (catupgrd*.log)

Upgrade Summary Report Located in:
/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/DEV18/upgrade/upg_summary.log

Grand Total Upgrade Time:    [0d:0h:27m:0s]

Review the upgrade summary log file


[oracle@vosap02 oracle]$ cat /app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/DEV18/upgrade/upg_summary.log


Oracle Database 12.1 Post-Upgrade Status Tool           12-01-2014 02:37:12

Component                               Current         Version  Elapsed Time
Name                                    Status          Number   HH:MM:SS

Oracle Server                          UPGRADED      12.1.0.2.0  00:09:22
JServer JAVA Virtual Machine              VALID      12.1.0.2.0  00:01:38
Oracle Real Application Clusters     OPTION OFF      12.1.0.2.0  00:00:02
Oracle Workspace Manager                  VALID      12.1.0.2.0  00:00:34
OLAP Analytic Workspace                   VALID      12.1.0.2.0  00:00:19
OLAP Catalog                         OPTION OFF      10.2.0.5.0  00:00:00
Oracle OLAP API                           VALID      12.1.0.2.0  00:00:22
Oracle XDK                                VALID      12.1.0.2.0  00:01:52
Oracle Text                               VALID      12.1.0.2.0  00:00:37
Oracle XML Database                       VALID      12.1.0.2.0  00:02:05
Oracle Database Java Packages             VALID      12.1.0.2.0  00:00:10
Oracle Multimedia                         VALID      12.1.0.2.0  00:02:33
Spatial                                UPGRADED      12.1.0.2.0  00:04:01
Final Actions                                                    00:00:50
Post Upgrade                                                     00:01:09

Total Upgrade Time: 00:25:57

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.12
Grand Total Upgrade Time:    [0d:0h:27m:0s]

Run post-upgrade steps

Upgrade timezone data to version 18

SQL> startup upgrade
ORACLE instance started.

Total System Global Area  880803840 bytes
Fixed Size                  2930416 bytes
Variable Size             469764368 bytes
Database Buffers          402653184 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.
SQL> SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_14.dat              14          0

SQL> EXEC DBMS_DST.BEGIN_UPGRADE(18);

PL/SQL procedure successfully completed.

SQL> SELECT PROPERTY_NAME, SUBSTR(property_value, 1, 30) value
FROM DATABASE_PROPERTIES
WHERE PROPERTY_NAME LIKE 'DST_%'
ORDER BY PROPERTY_NAME;  2    3    4

PROPERTY_NAME
--------------------------------------------------------------------------------
VALUE
--------------------------------------------------------------------------------
DST_PRIMARY_TT_VERSION
18

DST_SECONDARY_TT_VERSION
14

DST_UPGRADE_STATE
UPGRADE


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup;
ORACLE instance started.

Total System Global Area  880803840 bytes
Fixed Size                  2930416 bytes
Variable Size             469764368 bytes
Database Buffers          402653184 bytes
Redo Buffers                5455872 bytes
Database mounted.
Database opened.
SQL> set serverout on
SQL> VAR numfail number
BEGIN
DBMS_DST.UPGRADE_DATABASE(:numfail,
parallel => TRUE,
log_errors => TRUE,
log_errors_table => 'SYS.DST$ERROR_TABLE',
log_triggers_table => 'SYS.DST$TRIGGER_TABLE',
error_on_overlap_time => FALSE,
error_on_nonexisting_time => FALSE);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :numfail);
END;
/SQL>   2    3    4    5    6    7    8    9   10   11
Table list: "ROSODS"."STS_TEST"
Number of failures: 0
Table list: "RH_SCHEDULE"."SCHED_TRAIN_LOCATION"
Number of failures: 0
Table list: "RH_SCHEDULE"."SCHED_RPO"
Number of failures: 0
Table list: "RH_SCHEDULE"."SCHED_MASTER_TRAIN_BUNDLE"
Number of failures: 0
Table list: "RH_SCHEDULE"."SCHED_MASTER_TRAIN"
Number of failures: 0
Table list: "RH_SCHEDULE"."SCHED_CHANGE_REASON"
Number of failures: 0
Table list: "STG_OWNER"."STG_TRAIN_SCHED_RPO"
Number of failures: 0
Table list: "STG_OWNER"."STG_TRAIN_SCHEDULE_LOCATION"
Number of failures: 0
Table list: "STG_OWNER"."STG_TRAIN_SCHEDULE"
Number of failures: 0
Table list: "STG_OWNER"."STG_OUT_TRAIN_SCHED_MQ2_EXT"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_WEIGHBRIDGE_VEHICLE"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_WEIGHBRIDGE_READING"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_TRAIN_RUNNING"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_SPEED_RESTRICTION_LOC"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_SPEED_RESTRICTION"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_OUT_VEHICLES_TRIP_EXT"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_LOAD_DUMP"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_CONSIST_TRAIN"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_CONSIST_RAKE"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_CONSIST_AEI_VEHICLE"
Number of failures: 0
Table list: "STG_OWNER"."STAGING_CONSIST"
Number of failures: 0
Table list: "RPO"."STG_RPO_PLANNING_CHANGE"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_S"
Number of failures: 0
Table list: "GSMADMIN_INTERNAL"."AQ$_CHANGE_LOG_QUEUE_TABLE_L"
Number of failures: 0
Failures:0

PL/SQL procedure successfully completed.

SQL> VAR fail number
BEGIN
DBMS_DST.END_UPGRADE(:fail);
DBMS_OUTPUT.PUT_LINE('Failures:'|| :fail);
END;
/SQL>   2    3    4    5
An upgrade window has been successfully ended.
Failures:0

PL/SQL procedure successfully completed.

SQL> SELECT * FROM v$timezone_file;

FILENAME                VERSION     CON_ID
-------------------- ---------- ----------
timezlrg_18.dat              18          0

Recompile INVALID Objects

Run utlrp.sql followed by utluiobj.sql
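
A sketch of the recompile step (utlrp.sql recompiles invalid objects; its lengthy output is not shown here):

SQL> @?/rdbms/admin/utlrp.sql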


SQL> @utluiobj.sql
.
Oracle Database 12.1 Post-Upgrade Invalid Objects Tool 12-01-2014 19:21:50
.
This tool lists post-upgrade invalid objects that were not invalid
prior to upgrade (it ignores pre-existing pre-upgrade invalid objects).
.
Owner                     Object Name                     Object Type
.

PL/SQL procedure successfully completed.

A best practice is to gather system statistics 24-48 hours after the upgrade

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('start'); 

<< Run it for several hours during periods of normal workload>> 

SQL> exec DBMS_STATS.GATHER_SYSTEM_STATS('stop');

Upgrading from Oracle 11gR2 to Oracle 12c using Full Transportable Export and Import


The Transportable Tablespace (TTS) feature was introduced in Oracle 8i and the Cross-Platform Transportable Tablespace (XTTS) feature was introduced in Oracle 10g.

Since it only involved copying the data files of the tablespaces at the OS level over the network (possibly with a conversion step), it was potentially faster than traditional Export/Import or Data Pump, which required pretty much a block-by-block scan of every table in the database.

But Data Pump did have a number of advantages, and TTS and XTTS did have some disadvantages: they were more complex to set up and use, objects in the SYSTEM and SYSAUX tablespaces could not be moved, and the DBA had to ensure that the tablespaces being transported were 'self-contained'. Certain applications, when installed, created objects in the SYSTEM schema residing in the SYSTEM tablespace, and those objects would be ignored if we used the TTS method of migrating data.

The Full Transportable Export and Import feature combines the best features of TTS/XTTS and Data Pump.

It can help us upgrade a database to Oracle 12c or migrate a database into a 12c Container Database (CDB); it does, however, require the source database to be at least Oracle 11.2.0.3.

Full Transportable Export and Import considers tablespaces to be of two kinds – Administrative and User.

Administrative tablespaces include the tablespaces supplied by Oracle when we create a database – SYSTEM, SYSAUX, UNDO and TEMP. These tablespaces contain the procedures, packages, and seed data for the core Oracle database functionality and Oracle-provided database components such as Oracle Spatial, Oracle Text, OLAP, JAVAVM, and XML Database.

The User tablespaces are the tablespaces we create which holds the application data.

In Full Transportable Export/Import, the export process will extract the metadata for all objects contained in both user as well as administrative tablespaces.

In Oracle 12c, Oracle-supplied objects are neither exported nor imported by Data Pump. But Full Transportable Export/Import will use the Data Pump method to move the data as well as the metadata of any user-defined objects residing in administrative tablespaces, and use TTS functionality for all the user tablespaces; their data files are copied to the destination database.

Here is an example of migrating as well as upgrading data from an Oracle 11.2.0.4 database to Oracle 12c (12.1.0.2) using Full Transportable Export and Import.

To compare this with a conventional Data Pump export, I took a full export of the database – note the time and the size of the export dump file.


Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /home/oracle/exp1.dmp
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Tue Dec 9 02:22:57 2014 elapsed 0 00:05:48


[oracle@csmsdc-vosap02 ~]$ ls -l /home/oracle/exp1.dmp
-rw-r----- 1 oracle oinstall 2043740160 Dec  9 02:22 /home/oracle/exp1.dmp


To test this, we create objects in the administrative tablespaces, both SYSTEM and SYSAUX (this is just for the test case and is to be avoided in practice!).

SQL> alter user arisbp quota unlimited on system;

User altered.

SQL> create table arisbp.mytables
  2  tablespace system
  3  as select * from all_tables;

Table created.

SQL> create table arisbp.myindexes
  2  tablespace sysaux
  3  as select * from all_indexes;

Table created.

These are the tablespaces which exist in the source 11.2.0.4 database


SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS
ARISBPDATA
ARISBPINDEX

The first thing we need to do is make all the user tablespaces read only (downtime for the upgrade and migration starts here).


SQL> alter tablespace ARISBPDATA read only;

Tablespace altered.

SQL> alter tablespace ARISBPINDEX read only;

Tablespace altered.
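
As a quick sanity check (our addition, not part of the original run) we can confirm the tablespace status before starting the export. Note that the export below also transports the USERS tablespace, so it must be read only as well:

SQL> select tablespace_name, status from dba_tablespaces;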

We have to use the TRANSPORTABLE=ALWAYS and FULL=Y parameters to instruct Data Pump to treat this not as a conventional export of the database but as a Full Transportable Export.

We also have to use the VERSION parameter in this case, because the COMPATIBLE parameter of the source database is not set to at least 12.0; the source here is 11.2.0.4.

Since the APEX-related tables are located in an administrative tablespace (SYSAUX), we can see that the export process unloads the data as well as the metadata for these tables.

It will also provide a list of the data files belonging to the user tablespaces and these data files will need to be physically copied from the source to the target server.

Note the size of the export dump file: it is just 170 MB compared to over 2 GB when we did the conventional full export of the same database earlier.


$ expdp directory=exp_dir dumpfile=exp2.dmp full=y transportable=always logfile=exp_dir:full_export.log version=12

Export: Release 11.2.0.4.0 - Production on Tue Dec 9 19:03:49 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

Username: sys as sysdba
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYS"."SYS_EXPORT_FULL_01":  sys/******** AS SYSDBA directory=exp_dir dumpfile=exp2.dmp full=y transportable=always logfile=exp_dir:full_export.log version=12
Estimate in progress using BLOCKS method...

…
…

. . exported "APEX_030200"."WWV_FLOW_LIST_OF_VALUES_DATA"  392.0 KB    4184 rows
. . exported "APEX_030200"."WWV_FLOW_MESSAGES$"          348.4 KB    3706 rows
. . exported "APEX_030200"."WWV_FLOW_PAGE_PLUG_TEMPLATES"  165.8 KB     166 rows
. . exported "APEX_030200"."WWV_FLOW_WORKSHEETS"         233.3 KB      30 rows
. . exported "SYSMAN"."MGMT_JOB_CRED_PARAMS"             56.64 KB     187 rows
. . exported "APEX_030200"."WWV_FLOW_CUSTOM_AUTH_SETUPS"  21.56 KB      11 rows

….
….

. . exported "SYSTEM"."REPCAT$_SITE_OBJECTS"                 0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_SNAPGROUP"                    0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_OBJECTS"             0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_PARMS"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_REFGROUPS"           0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_SITES"               0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_TEMPLATE_TARGETS"             0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_AUTHORIZATIONS"          0 KB       0 rows
. . exported "SYSTEM"."REPCAT$_USER_PARM_VALUES"             0 KB       0 rows
. . exported "SYSTEM"."SQLPLUS_PRODUCT_PROFILE"              0 KB       0 rows
Master table "SYS"."SYS_EXPORT_FULL_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYS.SYS_EXPORT_FULL_01 is:
  /home/oracle/exp2.dmp
******************************************************************************
Datafiles required for transportable tablespace ARISBPDATA:
  /app/oracle/oradata/aristsbp/arisbpdata.266.818548233
Datafiles required for transportable tablespace ARISBPINDEX:
  /app/oracle/oradata/aristsbp/arisbpindex.267.818548235
Datafiles required for transportable tablespace USERS:
  /app/oracle/oradata/aristsbp/users.259.818547637
Job "SYS"."SYS_EXPORT_FULL_01" successfully completed at Tue Dec 9 19:08:11 2014 elapsed 0 00:04:03



[oracle@csmsdc-vosap02 ARIS12BP]$ ls -lrt /home/oracle/exp2.dmp
-rw-r----- 1 oracle oinstall 174755840 Dec  9 19:08 /home/oracle/exp2.dmp

Copy the data files of the ARISBPDATA, ARISBPINDEX and USERS tablespaces to the server hosting the Oracle 12.1.0.2 database

$ cp /app/oracle/oradata/aristsbp/arisbpdata.266.818548233 /app/oracle/oradata/ARIS12BP
$ cp /app/oracle/oradata/aristsbp/arisbpindex.267.818548235 /app/oracle/oradata/ARIS12BP
$ cp /app/oracle/oradata/aristsbp/users.259.818547637 /app/oracle/oradata/ARIS12BP

Now that we have finished copying the data files of the user tablespaces, we can make the tablespaces read write again in the source database.


$ sqlplus sys as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Dec 9 19:13:41 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> alter tablespace arisbpdata read write;

Tablespace altered.

SQL> alter tablespace arisbpindex read write;

Tablespace altered.

These are the tablespaces we currently have in the target 12.1.0.2 database


$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 9 19:17:18 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select tablespace_name from dba_tablespaces;

TABLESPACE_NAME
------------------------------
SYSTEM
SYSAUX
UNDOTBS1
TEMP
USERS

Create a directory object in the target database

SQL> create directory imp_dir as '/home/oracle';

Directory created.

Create the Import parameter file

directory=imp_dir
dumpfile=exp2.dmp
FULL=Y
VERSION=12
TRANSPORT_DATAFILES='/app/oracle/oradata/ARIS12BP/arisbpdata.266.818548233','/app/oracle/oradata/ARIS12BP/arisbpindex.267.818548235','/app/oracle/oradata/ARIS12BP/users.259.818547637'
REMAP_TABLESPACE=USERS:USERS_NEW
LOGFILE=imp_dir:full_import.log
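We can then run the import using this parameter file – assuming we saved it as imp_full_tts.par (a hypothetical file name), connecting as sys as sysdba when prompted just as we did for the export:

$ impdp parfile=imp_full_tts.par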

When we run the import in the Oracle 12c database, we can see that the tables which existed in the administrative tablespaces in the source database have both metadata and table data included in the dump file – the MYTABLES and MYINDEXES tables we created in the source 11.2.0.4 database are being imported with the table data.

. . imported "ARISBP"."MYINDEXES"                        1.191 MB    5123 rows
. . imported "SYSMAN"."MGMT_METRICS"                     4.966 MB   19620 rows
. . imported "APEX_030200"."WWV_FLOW_STEPS"              788.2 KB    1755 rows
. . imported "SYSMAN"."MGMT_IP_REPORT_ELEM_PARAMS"       623.0 KB    1869 rows
. . imported "ARISBP"."MYTABLES"                         811.4 KB    3262 rows
. . imported "APEX_030200"."WWV_FLOW_LIST_ITEMS"         590.3 KB    3048 rows

Check the status of the imported tablespaces in the Oracle 12c database


$ sqlplus sys as sysdba

SQL*Plus: Release 12.1.0.2.0 Production on Tue Dec 9 20:15:20 2014

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select tablespace_name,status from dba_tablespaces;

TABLESPACE_NAME                STATUS
------------------------------ ---------
SYSTEM                         ONLINE
SYSAUX                         ONLINE
UNDOTBS1                       ONLINE
TEMP                           ONLINE
USERS                          ONLINE
ARISBPDATA                     ONLINE
ARISBPINDEX                    ONLINE
USERS_NEW                      ONLINE

Do an object count of the ARISBP schema in 11g source and 12c target – all objects are present

SQL> select object_type,count(*) from dba_objects where owner='ARISBP'
  2  group by cube(object_type);

OBJECT_TYPE               COUNT(*)
----------------------- ----------
                              2592
LOB                            433
VIEW                             8
INDEX                         1698
TABLE                          453

SQL> quit

Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

$ . oraenv
ORACLE_SID = [ARIS12BP] ? aristsbp1
The Oracle base for ORACLE_HOME=/app/oracle/product/11.2.0.4/dbhome_1 is /app/oracle
$ sqlplus sys as sysdba

SQL*Plus: Release 11.2.0.4.0 Production on Tue Dec 9 20:16:36 2014

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select object_type,count(*) from dba_objects where owner='ARISBP'
  2   group by cube(object_type);

OBJECT_TYPE           COUNT(*)
------------------- ----------
                          2592
LOB                        433
VIEW                         8
INDEX                     1698
TABLE                      453

We can also do the same 11g to 12c upgrade by using Full Transportable Export/Import in Data Pump network mode. So there is no export dump file in this case and we create a database link from 12c target to 11gR2 source.
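Creating the database link on the 12c target would look something like this – the password and TNS alias here are hypothetical, and the link should connect to a user with the DATAPUMP_IMP_FULL_DATABASE role such as SYSTEM:

SQL> create public database link imp_link
  2  connect to system identified by manager
  3  using 'aristsbp';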

The Import command in such a case would look like this:

$ impdp NETWORK_LINK=imp_link FULL=Y VERSION=12 TRANSPORTABLE=ALWAYS TRANSPORT_DATAFILES='/app/oracle/oradata/ARIS12BP/arisbpdata.266.818548233','/app/oracle/oradata/ARIS12BP/arisbpindex.267.818548235','/app/oracle/oradata/ARIS12BP/users.259.818547637' REMAP_TABLESPACE=USERS:USERS_NEW

Platform Migration and Database Upgrade from Oracle 9i to Oracle 11g using GoldenGate


Let us look at an example of using Oracle GoldenGate to achieve near zero (not zero!) downtime for performing an upgrade from Oracle 9i (9.2.0.5) to Oracle 11g (11.2.0.4) as well as a platform migration from Solaris SPARC to Linux x86-64.

With no downtime for the application we have performed the following tasks:

  •  Installed Oracle GoldenGate on both source and target servers. (On source for the Oracle 9i environment we are using OGG 11.1.1.1.4 and on the target Oracle 11g environment we are using OGG 11.2.1.0.3)
  • Supplemental logging has been turned on at the database level for the source database
  • Supplemental logging has been enabled at the table level using the ADD TRANDATA or ADD SCHEMATRANDATA GoldenGate commands
  • Extract DDL capture has been enabled on the source
  • Configured the Manager process on both source and target
  • Created the Extract process on source
  • Created the Replicat process on target
  • Installed the 11.2.0.4 Oracle software and created the target 11g database with the same required tablespaces and database parameters as the source database. Remember that some parameters in Oracle 9i have been deprecated in 11g and certain new parameters have been added.

We need to be able to capture all changes in the database while the Oracle 9i database export is in progress. So we will start the capture Extract process or processes BEFORE we start the full database export.
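For illustration, creating and starting such a capture Extract on the source in GGSCI would look broadly like this – a sketch in which the Extract name EXTMIG and the GoldenGate database user are hypothetical, while the local trail ./dirdat/cc matches the trail the Replicat reads later:

GGSCI> DBLOGIN USERID ggs_owner, PASSWORD ggs_owner
GGSCI> ADD EXTRACT extmig, TRANLOG, BEGIN NOW
GGSCI> ADD EXTTRAIL ./dirdat/cc, EXTRACT extmig
GGSCI> START EXTRACT extmig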

We also then use the DBMS_FLASHBACK package to obtain the reference SCN on which the consistent database export will be based. Changes which occur in the database after this SCN will not be captured in the export dump file but will be captured by the GoldenGate Extract process on the source and applied by the Replicat process on the target.

Let us look at an example.

We have created a user called MIG_TEST and created some objects in this schema.

SQL> create user mig_test
  2  identified by mig_test
  3  default tablespace users
  4  temporary tablespace temp;

User created.

SQL> grant dba to mig_test;

Grant succeeded.

SQL> conn  mig_test/mig_test
Connected.
SQL> create table mytables as select * from all_tables;

Table created.

SQL> create table myindexes as select * from all_indexes;

Table created.




SQL> alter table mytables
  2  add constraint pk_mytables primary key (owner,table_name);

Table altered.

SQL> alter table myindexes
  2  add constraint pk_myindexes primary key (owner,index_name);

Table altered.


SQL> create table myobjects as select * from all_objects;

Table created.


SQL>  alter table myobjects
  2   add constraint pk_myobjects primary key (owner,object_name,object_type);

Table altered.

Obtain the current SCN on the source database and perform the full database export

SQL> SELECT dbms_flashback.get_system_change_number as current_scn
     from dual;

CURRENT_SCN
-----------
      63844


$ exp file=/app/oracle/oradump/testdb/exp/exp_mig.dmp full=y flashback_scn=63844 log=exp_mig.log

Copyright (c) 1982, 2002, Oracle Corporation.  All rights reserved.


Username: system
Password:

Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - 64bit Production
With the OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export done in US7ASCII character set and AL16UTF16 NCHAR character set

About to export the entire database ...
. exporting tablespace definitions
. exporting profiles
. exporting user definitions
. exporting roles
. exporting resource costs
. exporting rollback segment definitions
. exporting database links
. exporting sequence numbers
. exporting directory aliases
. exporting context namespaces
. exporting foreign function library names
. exporting PUBLIC type synonyms
. exporting private type synonyms

....

............

While the export is in progress, we make some changes to the objects in the MIG_TEST schema

SQL> update myobjects set object_type ='INDEX' where owner='MIG_TEST';

6 rows updated.

SQL> commit;

Commit complete.


SQL> delete mytables;

465 rows deleted.

SQL> commit;

Commit complete.


We can see that the MIG_TEST tables have been exported. But note that the last changes we made will not be part of the export as they occurred in the database after SCN 63844, which is the SCN the consistent export was based on.

So the MYTABLES table still has the 465 rows included in the export dump file even though we just deleted all the rows from the table.

 about to export MIG_TEST's tables via Conventional Path ...
. . exporting table                      MYINDEXES        474 rows exported
. . exporting table                      MYOBJECTS       5741 rows exported
. . exporting table                       MYTABLES        465 rows exported

On the 11.2.0.4 target database we perform the full database import.

Note the MYTABLES table still has 465 rows as we issued the DELETE statement after the export was started in the source database

. importing MIG_TEST's objects into MIG_TEST
. . importing table                    "MYINDEXES"        474 rows imported
. . importing table                    "MYOBJECTS"       5742 rows imported
. . importing table                     "MYTABLES"        465 rows imported

After the import has completed we now start the Replicat process on the target

Note we are using the AFTERCSN clause to tell the Replicat to apply only those changes on the target which were generated on the source database after the CSN 63844

GGSCI (LINT0004) 4>  start replicat repmig aftercsn  63844

Sending START request to MANAGER ...
REPLICAT REPMIG starting


GGSCI (LINT0004) 5> info replicat repmig

REPLICAT   REPMIG    Last Started 2014-12-17 14:07   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:04 ago)
Log Read Checkpoint  File ./dirdat/cc000000
                     2014-12-17 13:54:40.150229  RBA 2586343

We can see that the replicat process has applied the required UPDATE and DELETE statements which were captured in the OGG trail file

GGSCI (LINT0004) 6> stats replicat repmig latest

Sending STATS request to REPLICAT REPMIG ...

Start of Statistics at 2014-12-17 14:08:09.

Replicating from MIG_TEST.MYOBJECTS to MIG_TEST.MYOBJECTS:

*** Latest statistics since 2014-12-17 14:07:33 ***
        Total inserts                                      0.00
        Total updates                                      6.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   6.00

Replicating from MIG_TEST.MYTABLES to MIG_TEST.MYTABLES:

*** Latest statistics since 2014-12-17 14:07:33 ***
        Total inserts                                      0.00
        Total updates                                      0.00
        Total deletes                                    465.00
        Total discards                                     0.00
        Total operations                                 465.00

 

We will now verify that there is no lag in the Replicat process and the source and target databases are in sync.

At this stage the outage will commence for the application.

We stop the extract and replicat processes and will need to disconnect application users who were connected to the original 9i database and point the application now to connect to the new Oracle 11g database.

The duration of the application outage will depend on how fast we can perform the disconnection of the users and reconfiguration of the application to connect to the upgraded database.
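A sketch of that final cutover sequence in GGSCI, again assuming the Extract is named EXTMIG (a hypothetical name) – LAG REPLICAT should report that it is at EOF with no more records to process before we stop the Replicat:

GGSCI> LAG REPLICAT repmig
GGSCI> STOP EXTRACT extmig
GGSCI> STOP REPLICAT repmig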

EM12c Cloud Control Metric Extensions – Monitor failed DBMS SCHEDULER Jobs


So what are Metric Extensions?

As the name implies, metric extensions enable us to extend the out-of-the-box monitoring capabilities of Enterprise Manager and customize monitoring according to the requirements of our specific environment.

Very often we have monitoring and alerting requirements for which metrics are not available in OEM, forcing us to run external scripts outside OEM.

For example we would like to be alerted and notified via OEM in the following circumstances:

  • Standby database lags behind the Primary database by more than 5 archive log files
  • There are failed DBMS jobs
  • In case some key program units become INVALID
  • In case a very important index becomes unusable or is dropped ….. and so on.

We can achieve all of the above by creating metric extensions in EM 12c.

The EM 12c Metric Extensions note shows us how to create a metric extension to raise an incident and be notified if one of the critical DBMS_SCHEDULER jobs fails for any reason. The metric extension checks for a job status of 'BROKEN' to raise a critical event which will then send an email notification to the administrator.

We first create a metric extension as a deployable draft, test it, deploy it to required targets and also publish it. Metric extensions can also be part of a monitoring template.

The note shows the various screen shots of creating and deploying a metric extension in EM 12c.
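For reference, the query behind such a metric extension could be something along these lines – a sketch; the actual query used in the note may differ:

select owner || '.' || job_name as job_name, state
from dba_scheduler_jobs
where state = 'BROKEN';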


Note – we can set up email notification for failed DBMS Scheduler Jobs outside of OEM as well using the DBMS_SCHEDULER package.

BEGIN
  DBMS_SCHEDULER.set_scheduler_attribute('email_server', 'mail.soorma.com:25');
  DBMS_SCHEDULER.set_scheduler_attribute('email_sender', 'gavin.soorma@soorma.com');
END;
/

BEGIN
DBMS_SCHEDULER.ADD_JOB_EMAIL_NOTIFICATION (
job_name => 'TEST_JOB',
recipients => 'gavin.soorma@soorma.com',
sender => 'gavin.soorma@soorma.com',
subject => 'Scheduler Job Notification-%job_owner%.%job_name%-%event_type%',
body => '%event_type% occurred at %event_timestamp%. %error_message%',
events => 'JOB_FAILED, JOB_BROKEN');
END;
/

This is an example of the email alert we will be sent:

JOB_FAILED occurred at 05-JAN-15 01.32.22.808171 PM +08:00. ORA-01653: unable to extend table SYSTEM.MYOBJECTS by 128 in tablespace GAVIN
ORA-06512: at line 3

 

Minimal Downtime Cross Platform Migration and 12c Database Upgrade using Data Guard


This note describes the procedure used to perform a minimal downtime platform migration from Windows to Linux as well as a database upgrade from Oracle 11.2.0.4 to Oracle 12c (12.1.0.2).

We create a Data Guard physical standby database using the DUPLICATE FROM ACTIVE DATABASE feature, followed by a switchover and then we activate the standby and make it a primary database. Finally we upgrade the database to 12c using the catctl.pl perl utility with the parallel upgrade option.

By using Data Guard and the 12c command line parallel upgrade utility the entire operation has been performed with database outage of less than 30 minutes.

Read the note …

Performing a database clone using a Data Guard Snapshot Database


Sometimes we need an exact replica of the production data in order to urgently test an issue encountered in production. If the production database is very large, running a clone process on the production server or taking a fresh full database backup just for the clone is undesirable because of the potential performance impact on production.

So if we have a physical standby database in place (which is a block-for-block replica of production), why not use that database as the source for the clone – the production server is never touched.

We can briefly convert the physical standby database to a snapshot standby database and use RMAN DUPLICATE FROM ACTIVE DATABASE to create the clone database without having to take a fresh full database backup.

This note describes the process of performing a database refresh or clone using the Data Guard Standby database as the source for the clone and not the production primary database.
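In outline, the standby-side conversion steps would be something like this (a sketch; the clone itself is covered in the note):

-- convert the physical standby and open the snapshot standby
SQL> alter database convert to snapshot standby;
SQL> alter database open;

-- run the RMAN DUPLICATE FROM ACTIVE DATABASE clone using the snapshot standby as the source

-- then convert back, and redo apply resumes from where it left off
SQL> shutdown immediate
SQL> startup mount
SQL> alter database convert to physical standby;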

Read the note ….

Oracle 12c New Feature – Privilege Analysis


In many databases we find that over the course of time certain users particularly application owner schemas and developer user accounts have been granted excessive privileges – more than what they need to do their job as developers or required for the application to perform normally.

Excessive privileges violate the basic security principle of least privilege.

In Oracle 12c we now have a package called DBMS_PRIVILEGE_CAPTURE through which we can identify unnecessary object and system privileges – privileges which have been granted but have not actually been used – and then revoke them.

The privilege analysis can be at the entire database level, or based on a particular role or context-specific – like a particular user in the database.

These are the main steps involved:

1) Create the Database, Role or Context privilege analysis policy via DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE
2) Start the analysis of used privileges via DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE
3) Stop the analysis when required via DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE
4) Generate the report via DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT
5) Examine views like DBA_USED_SYSPRIVS, DBA_USED_OBJPRIVS, DBA_USED_PRIVS and DBA_UNUSED_PRIVS

In the example below we do role-and-context based analysis – the role 'DBA' used by the user 'SH'.

SQL> alter session set container=sales;

Session altered.

SQL>  grant dba to sh;

Grant succeeded.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE(-
> name => 'AUDIT_DBA_SH',-
>  type => dbms_privilege_capture.g_role_and_context,-
> roles => role_name_list ('DBA'),-
> condition => 'SYS_CONTEXT (''USERENV'',''SESSION_USER'')=''SH''');

PL/SQL procedure successfully completed.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(-
> name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.

SQL> conn sh/sh@sales
Connected.


SQL> alter user hr identified by hr;

User altered.

SQL> create table myobjects as select * from all_objects;
create table myobjects as select * from all_objects
             *
ERROR at line 1:
ORA-00955: name is already used by an existing object


SQL> drop table myobjects;

Table dropped.

SQL> alter tablespace users offline;

Tablespace altered.

SQL> alter tablespace users online;

Tablespace altered.



SQL> conn / as sysdba
Connected.


SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(-
>  name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.

SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT(-
>  name => 'AUDIT_DBA_SH');

PL/SQL procedure successfully completed.


SQL> select name,type,enabled,roles,context
  2  from dba_priv_captures;

NAME           TYPE             E ROLES           CONTEXT
-------------- ---------------- - --------------- ------------------------------------------------------------
AUDIT_DBA_SH   ROLE_AND_CONTEXT N ROLE_ID_LIST(4) SYS_CONTEXT ('USERENV','SESSION_USER')='SH'


SQL> select username,sys_priv from dba_used_sysprivs;


USERNAME             SYS_PRIV
-------------------- ----------------------------------------
SH                   CREATE SESSION
SH                   ALTER USER
SH                   CREATE TABLE
SH                   ALTER TABLESPACE
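Similarly, we can query DBA_UNUSED_SYSPRIVS to see what was granted but never exercised during the capture window, and drop the capture policy once we are finished with it – a quick sketch:

SQL> select username, sys_priv from dba_unused_sysprivs where username='SH';

SQL> exec SYS.DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE(name => 'AUDIT_DBA_SH');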

Mass database upgrades using 12c Cloud Control


If we have a large number of databases to upgrade, we can use the Database Provisioning feature of 12c Cloud Control to automate and streamline the process and take away much of the manual tasks from the DBA.

Not only is this time saving, but it also reduces the risk of failure in the upgrade process due to human error.

The note provides an example of how two Oracle 11g databases residing on separate Linux servers are automatically upgraded to Oracle 12c via 12c Cloud Control.

Read the note on upgrading databases using 12c Cloud Control ….

Upgrade Grid Infrastructure 11g (11.2.0.3) to 12c (12.1.0.2)


I have recently tested the upgrade to RAC Grid Infrastructure 12.1.0.2 on my test RAC Oracle Virtualbox Linux 6.5 x86-64 environment.

The upgrade went very smoothly but we have to take a few things into account – some things have changed in 12.1.0.2 as compared to Grid Infrastructure 12.1.0.1.

The most notable change regards the Grid Infrastructure Management Repository (GIMR).

In 12.1.0.1 we had the option of installing the GIMR database – MGMTDB. But in 12.1.0.2 it is mandatory, and the MGMTDB database is automatically created as part of the upgrade or initial installation of 12.1.0.2 Grid Infrastructure.

The GIMR primarily stores historical Cluster Health Monitor metric data. It runs as a container database on a single node of the RAC cluster.

The problem I found is that the datafiles for the MGMTDB database are created on the same ASM disk group which holds the OCR and Voting Disk, and there is a prerequisite of at least 4 GB of free space in that ASM disk group – otherwise the error INS-43100 is returned.

I had to cancel the upgrade process and add another disk to the +OCR ASM disk group to ensure that at least 4 GB of free space was available and after that the upgrade process went through very smoothly.
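Adding the disk was a standard ASM operation along these lines, connected to the ASM instance as SYSASM (the disk path here is hypothetical):

SQL> alter diskgroup OCR add disk '/dev/oracleasm/disks/OCR_DISK2';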


On both the nodes of the RAC cluster we will create the directory structure for the 12.1.0.2 Grid Infrastructure environment as this is an out-of-place upgrade.

Also it is very important to check the health of the RAC cluster before the upgrade (via the crsctl check cluster -all command) and to run the runcluvfy.sh script to verify that all the prerequisites for the 12c GI upgrade are in place.

[oracle@rac1 bin]$ crsctl query crs softwareversion rac1
Oracle Clusterware version on node [rac1] is [11.2.0.3.0]

[oracle@rac1 bin]$ crsctl query crs softwareversion rac2
Oracle Clusterware version on node [rac2] is [11.2.0.3.0]

[oracle@rac1 grid]$ ./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0/grid -dest_crshome /u02/app/12.1.0/grid -dest_version 12.1.0.2.0


[oracle@rac1 ~]$ crsctl check cluster -all
**************************************************************
rac1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

[oracle@rac1 ~]$ crsctl query crs releaseversion
Oracle High Availability Services release version on the local node is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs softwareversion
Oracle Clusterware version on node [rac1] is [12.1.0.2.0]

[oracle@rac1 ~]$ crsctl query crs activeversion -f
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [0].

[oracle@rac1 ~]$ ps -ef |grep pmon
oracle 1278 1 0 14:53 ? 00:00:00 mdb_pmon_-MGMTDB
oracle 16354 1 0 14:22 ? 00:00:00 asm_pmon_+ASM1
oracle 17217 1 0 14:23 ? 00:00:00 ora_pmon_orcl1

[root@rac1 bin]# ./oclumon manage -get reppath

CHM Repository Path = +OCR/_MGMTDB/FD9B43BF6A646F8CE043B6A9E80A2815/DATAFILE/sysmgmtdata.269.873212089

[root@rac1 bin]# ./srvctl status mgmtdb -verbose
Database is enabled
Instance -MGMTDB is running on node rac1. Instance status: Open.

[root@rac1 bin]# ./srvctl config mgmtdb
Database unique name: _mgmtdb
Database name:
Oracle home:
Oracle user: oracle
Spfile: +OCR/_MGMTDB/PARAMETERFILE/spfile.268.873211787
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Type: Management
PDB name: rac_cluster
PDB service: rac_cluster
Cluster name: rac-cluster
Database instance: -MGMTDB

Oracle 12c New Feature Real-Time Database Monitoring


Real-Time Database Monitoring, a new feature in Oracle 12c, extends the Real-Time SQL Monitoring feature introduced in Oracle 11g. The main difference is that SQL Monitoring applies only to a single SQL statement.

Very often we run batch jobs which in turn invoke many SQL statements. When a batch job suddenly starts running slowly, it becomes very difficult to identify which of the individual SQL statements in the job are contributing to the performance issue. Or perhaps batch jobs have started running slowly only after a database upgrade, and we need to identify which particular SQL statement or statements have suffered performance regressions after the upgrade.

The API used for Real-Time Database Monitoring is the DBMS_SQL_MONITOR package with the BEGIN_OPERATION and END_OPERATION calls.

So what is a Database Operation?

A database operation is single or multiple SQL statements and/or PL/SQL blocks between two points in time.

Basically to monitor a database operation it needs to be given a name along with a begin and end point.

The database operation name along with its execution ID will help us identify the operation and we can use several views for this purpose like V$SQL_MONITOR as well as V$ACTIVE_SESSION_HISTORY via the DBOP_NAME and DBOP_EXEC_ID columns.
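For example, while the operation is running we can locate it with a query like this (a sketch):

SQL> select dbop_name, dbop_exec_id, status
  2  from v$sql_monitor
  3  where dbop_name is not null;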

Let us look at an example of monitoring database operations using Oracle 12c Database Express.

We create a file called mon.sql and will run it in the SH schema while using Database Express to monitor the operation.

The name of the database operation is DBOPS and we are running a number of SQL statements as part of the same database operation.

DECLARE
n NUMBER;
BEGIN
n := dbms_sql_monitor.begin_operation('DBOPS');
END;
/

drop table sales_copy;
CREATE TABLE SALES_COPY AS SELECT * FROM SALES;
INSERT INTO SALES_COPY SELECT * FROM SALES;
COMMIT;
DELETE SALES_COPY;
COMMIT;
SELECT * FROM SALES ;
select * from sales where cust_id=1234;

DECLARE
m NUMBER;
BEGIN
select dbop_exec_id into m from v$sql_monitor
where dbop_name='DBOPS'
and status='EXECUTING';
dbms_sql_monitor.end_operation('DBOPS', m);
END;
/

From the Database Express 12c Performance menu > Performance Hub > Monitored SQL

In this figure we can see that the DBOPS database operation is still running.

Click the DBOPS link in the ID column


We can see the various SQL statements which are running as part of the operation and we can also see that one particular SQL is taking much more database time as compared to the other 3 SQL ID’s.


The DELETE SALES_COPY SQL statement is taking over 30 seconds of database time as compared to other statements which are taking around just a second of database time in comparison. It is consuming close to 2 million buffer gets as well.

So we now know which single SQL statement is the most costly for this particular database operation.


We can now see that the database operation is finally complete and it has taken 42 seconds of database time.
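Outside of Database Express, the same monitored operation can also be reported on from SQL*Plus via DBMS_SQLTUNE, whose REPORT_SQL_MONITOR function accepts the operation name in 12c – a sketch:

SQL> set long 1000000 pagesize 0
SQL> select dbms_sqltune.report_sql_monitor(dbop_name => 'DBOPS', type => 'TEXT') from dual;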

Oracle Grid Infrastructure Patch Set Update 12.1.0.2.2 (Jan 2015)


This note details the process of applying the Grid Infrastructure 12.1.0.2 PSU Jan 2015 patch on a two node Linux x86-64 RAC cluster.

The patch required is 19954978 and this in turn will apply the 19769480 patch to the database home as well.

The patch is rolling applicable and so involves minimal downtime.

Before applying the patch we need to do a couple of things first (on both nodes).

  • Ensure opatch is a minimum version of 12.1.0.1.5 on both the Grid as well as Database home on both nodes.

[oracle@rac1 OPatch]$ ./opatch version
OPatch Version: 12.1.0.1.6

OPatch succeeded.

 

  • Create an OCM response file - again on both nodes

[oracle@rac1 dbhome_1]$ echo $ORACLE_HOME
/u02/app/oracle/product/12.1.0/dbhome_1

[oracle@rac1 dbhome_1]$ $ORACLE_HOME/OPatch/ocm/bin/emocmrsp -no_banner -output /tmp/file.rsp

Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:

You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (/tmp/file.rsp) was successfully created.

 

  • Create a directory for the PSU patch - make sure the directory has no contents.

[oracle@rac1 dbhome_1]$ cd /u02/oracle/software/
[oracle@rac1 software]$ mkdir GI_PSU_JAN15
[oracle@rac1 software]$ cd GI_PSU_JAN15/
[oracle@rac1 GI_PSU_JAN15]$ mv /media/sf_software/p19954978_121020_Linux-x86-64.zip .

 

Check for any patch conflicts

[root@rac1 OPatch]# ./opatchauto apply /u02/oracle/software/GI_PSU_JAN15/19954978 -analyze -ocmrf /tmp/file.rsp -oh /u02/app/12.1.0/grid
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.6
OUI Version : 12.1.0.2.0
Running from : /u02/app/12.1.0/grid

opatchauto log file: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/19954978/opatch_gi_2015-03-02_21-46-22_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

OCM RSP file has been ignored in analyze mode.

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u02/oracle/software/GI_PSU_JAN15/19954978
Grid Infrastructure Patch(es): 19769473 19769479 19769480 19872484
DB Patch(es): 19769479 19769480

Patch Validation: Successful
User specified following Grid Infrastructure home:
/u02/app/12.1.0/grid

Analyzing patch(es) on "/u02/app/12.1.0/grid" ...
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769473" successfully analyzed on "/u02/app/12.1.0/grid" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769479" successfully analyzed on "/u02/app/12.1.0/grid" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769480" successfully analyzed on "/u02/app/12.1.0/grid" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19872484" successfully analyzed on "/u02/app/12.1.0/grid" for apply.

Apply Summary:
Following patch(es) are successfully analyzed:
GI Home: /u02/app/12.1.0/grid: 19769473,19769479,19769480,19872484

opatchauto succeeded.

[root@rac1 OPatch]# ./opatchauto apply /u02/oracle/software/GI_PSU_JAN15/19954978 -analyze -ocmrf /tmp/file.rsp -database orcl
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.6
OUI Version : 12.1.0.2.0
Running from : /u02/app/oracle/product/12.1.0/dbhome_1

opatchauto log file: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/19954978/opatch_gi_2015-03-02_21-49-41_analyze.log

NOTE: opatchauto is running in ANALYZE mode. There will be no change to your system.

OCM RSP file has been ignored in analyze mode.

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u02/oracle/software/GI_PSU_JAN15/19954978
Grid Infrastructure Patch(es): 19769473 19769479 19769480 19872484
DB Patch(es): 19769479 19769480

Patch Validation: Successful
User specified the following DB home(s) for this session:
/u02/app/oracle/product/12.1.0/dbhome_1

Analyzing patch(es) on "/u02/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769479" successfully analyzed on "/u02/app/oracle/product/12.1.0/dbhome_1" for apply.
Patch "/u02/oracle/software/GI_PSU_JAN15/19954978/19769480" successfully analyzed on "/u02/app/oracle/product/12.1.0/dbhome_1" for apply.

[WARNING] The local database(s) on "/u02/app/oracle/product/12.1.0/dbhome_1" is not running. SQL changes, if any, cannot be analyzed.

Apply Summary:
Following patch(es) are successfully analyzed:
DB Home: /u02/app/oracle/product/12.1.0/dbhome_1: 19769479,19769480

opatchauto succeeded.

 

Ensure that you have enough disk space for the Jan 2015 PSU!!

Although the patch is around 800 MB, when we unzip the patch it occupies over 3 GB of disk space. In addition the patch application requires over 12 GB of free disk space on the Grid Infrastructure home and over 5 GB of free space for the Database home.

Otherwise the patch will fail with an error like the one shown below:

Applying patch(es) to "/u02/app/oracle/product/12.1.0/dbhome_1" ...
Command "/u02/app/oracle/product/12.1.0/dbhome_1/OPatch/opatch napply -phBaseFile /tmp/OraDB12Home1_oracle_patchList -local -invPtrLoc /u02/app/12.1.0/grid/oraInst.loc -oh /u02/app/oracle/product/12.1.0/dbhome_1 -silent -ocmrf /tmp/file.rsp" execution failed:
UtilSession failed:
Prerequisite check “CheckSystemSpace” failed.

I had to create another mount point with adequate disk space and create a soft link from the existing mount point holding the GI and Database Oracle Homes to the new mount point.

 

  • Apply the Jan 2015 PSU patch

[root@rac1 OPatch]# ./opatchauto apply /u01/app/GI_PSU_JAN15/19954978 -ocmrf /tmp/file.rsp
OPatch Automation Tool
Copyright (c)2014, Oracle Corporation. All rights reserved.

OPatchauto Version : 12.1.0.1.6
OUI Version : 12.1.0.2.0
Running from : /u02/app/12.1.0/grid

opatchauto log file: /u02/app/12.1.0/grid/cfgtoollogs/opatchauto/19954978/opatch_gi_2015-03-03_11-53-43_deploy.log

Parameter Validation: Successful

Configuration Validation: Successful

Patch Location: /u01/app/GI_PSU_JAN15/19954978
Grid Infrastructure Patch(es): 19769473 19769479 19769480 19872484
DB Patch(es): 19769479 19769480

Patch Validation: Successful
Grid Infrastructure home:
/u02/app/12.1.0/grid
DB home(s):
/u02/app/oracle/product/12.1.0/dbhome_1

Performing prepatch operations on RAC Home (/u02/app/oracle/product/12.1.0/dbhome_1) … Successful
Following database(s) and/or service(s) were stopped and will be restarted later during the session: orcl

Applying patch(es) to "/u02/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/u01/app/GI_PSU_JAN15/19954978/19769479" successfully applied to "/u02/app/oracle/product/12.1.0/dbhome_1".
Patch "/u01/app/GI_PSU_JAN15/19954978/19769480" successfully applied to "/u02/app/oracle/product/12.1.0/dbhome_1".

Performing prepatch operations on CRS Home... Successful

Applying patch(es) to "/u02/app/12.1.0/grid" ...
Patch "/u01/app/GI_PSU_JAN15/19954978/19769473" successfully applied to "/u02/app/12.1.0/grid".
Patch "/u01/app/GI_PSU_JAN15/19954978/19769479" successfully applied to "/u02/app/12.1.0/grid".
Patch "/u01/app/GI_PSU_JAN15/19954978/19769480" successfully applied to "/u02/app/12.1.0/grid".
Patch "/u01/app/GI_PSU_JAN15/19954978/19872484" successfully applied to "/u02/app/12.1.0/grid".

Performing postpatch operations on CRS Home… Successful

Performing postpatch operations on RAC Home (/u02/app/oracle/product/12.1.0/dbhome_1) … Successful

SQL changes, if any, are applied successfully on the following database(s): orcl

Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u02/app/12.1.0/grid: 19769473,19769479,19769480,19872484
DB Home: /u02/app/oracle/product/12.1.0/dbhome_1: 19769479,19769480

opatchauto succeeded.

 

  • Loading Modified SQL Files into the Database

On only one node of the RAC cluster we need to start the database instance and execute the following command:

cd $ORACLE_HOME/OPatch

./datapatch -verbose

 

  • Verify the patch installation

SQL> select patch_id, version, action, status, description from dba_registry_sqlpatch;

  PATCH_ID VERSION    ACTION  STATUS   DESCRIPTION
---------- ---------- ------- -------- --------------------------------------------------
  19769480 12.1.0.2   APPLY   SUCCESS  Database Patch Set Update : 12.1.0.2.2 (19769480)

[oracle@rac1 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.6
Copyright (c) 2015, Oracle Corporation. All rights reserved.

Oracle Home : /u02/app/oracle/product/12.1.0/dbhome_1
Central Inventory : /u01/app/oraInventory
from : /u02/app/oracle/product/12.1.0/dbhome_1/oraInst.loc
OPatch version : 12.1.0.1.6
OUI version : 12.1.0.2.0
Log file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/opatch2015-03-03_15-41-19PM_1.log

Lsinventory Output file location : /u02/app/oracle/product/12.1.0/dbhome_1/cfgtoollogs/opatch/lsinv/lsinventory2015-03-03_15-41-19PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Database 12c 12.1.0.2.0
There are 1 products installed in this Oracle Home.

Interim patches (2) :

Patch 19769480 : applied on Tue Mar 03 07:28:45 WST 2015
Unique Patch ID: 18350083
Patch description: "Database Patch Set Update : 12.1.0.2.2 (19769480)"
Created on 15 Dec 2014, 06:54:52 hrs PST8PDT
Bugs fixed:
20284155, 19157754, 18885870, 19303936, 19708632, 19371175, 18618122
19329654, 19075256, 19074147, 19044962, 19289642, 19068610, 18988834
19028800, 19561643, 19058490, 19390567, 18967382, 19174942, 19174521
19176223, 19501299, 19178851, 18948177, 18674047, 19723336, 19189525
19001390, 19176326, 19280225, 19143550, 18250893, 19180770, 19155797
19016730, 19185876, 18354830, 19067244, 18845653, 18849537, 18964978
19065556, 19440586, 19439759, 19024808, 18952989, 18990693, 19052488
19189317, 19409212, 19124589, 19154375, 19279273, 19468347, 19054077
19048007, 19248799, 19018206, 18921743, 14643995, 18456643, 16870214
19434529, 19706965, 17835294, 20074391, 18791688, 19197175, 19134173
19174430, 19050649, 19769480, 19077215, 19577410, 18288842, 18436647
19520602, 19149990, 19076343, 19195895, 18610915, 19068970, 19518079
19304354, 19001359, 19676905, 19309466, 19382851, 18964939, 16359751
19022470, 19532017, 19597439, 18674024, 19430401

Patch 19769479 : applied on Tue Mar 03 07:28:27 WST 2015
Unique Patch ID: 18256426
Patch description: "OCW Patch Set Update : 12.1.0.2.2 (19769479)"
Created on 22 Dec 2014, 20:20:11 hrs PST8PDT
Bugs fixed:
19700294, 19164099, 19331454, 18589889, 19139608, 19280860, 18508710
18955644, 19061429, 19146822, 18798432, 19133945, 19341538, 18946768
19135521, 19537762, 19361757, 19187207, 19302350, 19130141, 16286734
19699720, 19168690, 19266658, 18762843, 18899171, 18945249, 19045143
19146980, 19244316, 19184799, 19471722, 18634372, 19027351, 19205086
18707416, 19184188, 19131709, 19281106, 19537547, 18862203, 19079087
19031737, 20006646, 18991776, 18439295, 19380733, 19150517, 19148367
18968981, 20231741, 18943696, 19217019, 18135723, 19163425, 19524857
18849021, 18730096, 18890943, 18975620, 19205617, 18861196, 19154753
17940721, 19150313, 18843054, 18708349, 19522313, 18748932, 18835283
18953639, 19184765, 19499021, 19067804, 19046190, 19371270, 19051385
19318983, 19209951, 19054979, 19050688, 19154673, 18752378, 19226141
19053891, 18871287, 19150088, 18998228, 18922918, 18980002, 19013444
19683886, 19234177, 18956780, 18998379, 20157569, 18777835, 19273577
19026993, 17338864, 19367276, 19075747, 19513650, 18990354, 19288396
19702758, 19427050, 18952577, 19414274, 19127078, 19147513, 18910443
20053557, 19473088, 19315567, 19148982, 18290252, 19178517, 18813323
19500293, 19529729, 18643483, 19455563, 19134098, 18523468, 19277814
19319904, 18703978, 19071526, 18536826, 18965694, 19703246, 19292605
19226858, 18850051, 19602208, 19192901, 18417590, 19370739, 18920408
18636884, 18776786, 18989446, 19148793, 19043795, 19585454, 18260170
18317489, 19479503, 19029647, 19179158, 18919682, 18901356, 19140712
19807548, 19124972, 18678829, 18910748, 18849896, 19147509, 19076165
18953878, 19273758, 19498411, 18964974, 18999195, 18759724, 18835366
19459023, 19184276, 19013789, 19207286, 18950232, 19680763, 19259765
19066844, 19148791, 19234907, 19538714, 19449737, 19649640, 18962892
19062675, 19187515, 19513969, 19513888, 19230771, 18859710, 19504641
19453778, 19341481, 19343245, 18304090, 19314048, 19473851, 19068333
18834934, 18843572, 19241655, 19470791, 19458082, 18242738, 18894342
19185148, 18945435, 18372060, 19232454, 18953889, 18541110, 19319192
19023430, 19204743, 19140711, 19259290, 19178629, 19045388, 19304104
19241857, 19522571, 19140891, 19076778, 18875012, 19270660, 19457575
19066699, 18861564, 19021575, 19069755, 19273760, 18715884, 19225265
19584688, 18798573, 19018001, 19325701, 19292272, 18819158, 19270956
19068003, 18937186, 19049721, 19368917, 19222693, 18700893, 18406774
18868829, 19010177, 19141785, 19163887, 18852058, 18715868, 19538241, 19804032

Rac system comprising of multiple nodes
Local node = rac1
Remote node = rac2

--------------------------------------------------------------------------------

OPatch succeeded.
[oracle@rac1 OPatch]$

[oracle@rac1 OPatch]$ . oraenv
ORACLE_SID = [orcl1] ? +ASM1
The Oracle base remains unchanged with value /u02/app/oracle
[oracle@rac1 OPatch]$ cd /u02/app/12.1.0/grid/OPatch
[oracle@rac1 OPatch]$ ./opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.6
Copyright (c) 2015, Oracle Corporation. All rights reserved.

Oracle Home : /u02/app/12.1.0/grid
Central Inventory : /u01/app/oraInventory
from : /u02/app/12.1.0/grid/oraInst.loc
OPatch version : 12.1.0.1.6
OUI version : 12.1.0.2.0
Log file location : /u02/app/12.1.0/grid/cfgtoollogs/opatch/opatch2015-03-03_15-43-45PM_1.log

Lsinventory Output file location : /u02/app/12.1.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2015-03-03_15-43-45PM.txt

--------------------------------------------------------------------------------
Installed Top-level Products (1):

Oracle Grid Infrastructure 12c 12.1.0.2.0
There are 1 products installed in this Oracle Home.

Interim patches (4) :

Patch 19872484 : applied on Tue Mar 03 09:54:54 WST 2015
Unique Patch ID: 18291456
Patch description: "WLM Patch Set Update: 12.1.0.2.2 (19872484)"
Created on 2 Dec 2014, 23:18:41 hrs PST8PDT
Bugs fixed:
19016964, 19582630

Patch 19769480 : applied on Tue Mar 03 09:54:49 WST 2015
Unique Patch ID: 18350083
Patch description: "Database Patch Set Update : 12.1.0.2.2 (19769480)"
Created on 15 Dec 2014, 06:54:52 hrs PST8PDT
Bugs fixed:
20284155, 19157754, 18885870, 19303936, 19708632, 19371175, 18618122
19329654, 19075256, 19074147, 19044962, 19289642, 19068610, 18988834
19028800, 19561643, 19058490, 19390567, 18967382, 19174942, 19174521
19176223, 19501299, 19178851, 18948177, 18674047, 19723336, 19189525
19001390, 19176326, 19280225, 19143550, 18250893, 19180770, 19155797
19016730, 19185876, 18354830, 19067244, 18845653, 18849537, 18964978
19065556, 19440586, 19439759, 19024808, 18952989, 18990693, 19052488
19189317, 19409212, 19124589, 19154375, 19279273, 19468347, 19054077
19048007, 19248799, 19018206, 18921743, 14643995, 18456643, 16870214
19434529, 19706965, 17835294, 20074391, 18791688, 19197175, 19134173
19174430, 19050649, 19769480, 19077215, 19577410, 18288842, 18436647
19520602, 19149990, 19076343, 19195895, 18610915, 19068970, 19518079
19304354, 19001359, 19676905, 19309466, 19382851, 18964939, 16359751
19022470, 19532017, 19597439, 18674024, 19430401

Patch 19769479 : applied on Tue Mar 03 09:54:10 WST 2015
Unique Patch ID: 18256426
Patch description: "OCW Patch Set Update : 12.1.0.2.2 (19769479)"
Created on 22 Dec 2014, 20:20:11 hrs PST8PDT
Bugs fixed:
19700294, 19164099, 19331454, 18589889, 19139608, 19280860, 18508710
18955644, 19061429, 19146822, 18798432, 19133945, 19341538, 18946768
19135521, 19537762, 19361757, 19187207, 19302350, 19130141, 16286734
19699720, 19168690, 19266658, 18762843, 18899171, 18945249, 19045143
19146980, 19244316, 19184799, 19471722, 18634372, 19027351, 19205086
18707416, 19184188, 19131709, 19281106, 19537547, 18862203, 19079087
19031737, 20006646, 18991776, 18439295, 19380733, 19150517, 19148367
18968981, 20231741, 18943696, 19217019, 18135723, 19163425, 19524857
18849021, 18730096, 18890943, 18975620, 19205617, 18861196, 19154753
17940721, 19150313, 18843054, 18708349, 19522313, 18748932, 18835283
18953639, 19184765, 19499021, 19067804, 19046190, 19371270, 19051385
19318983, 19209951, 19054979, 19050688, 19154673, 18752378, 19226141
19053891, 18871287, 19150088, 18998228, 18922918, 18980002, 19013444
19683886, 19234177, 18956780, 18998379, 20157569, 18777835, 19273577
19026993, 17338864, 19367276, 19075747, 19513650, 18990354, 19288396
19702758, 19427050, 18952577, 19414274, 19127078, 19147513, 18910443
20053557, 19473088, 19315567, 19148982, 18290252, 19178517, 18813323
19500293, 19529729, 18643483, 19455563, 19134098, 18523468, 19277814
19319904, 18703978, 19071526, 18536826, 18965694, 19703246, 19292605
19226858, 18850051, 19602208, 19192901, 18417590, 19370739, 18920408
18636884, 18776786, 18989446, 19148793, 19043795, 19585454, 18260170
18317489, 19479503, 19029647, 19179158, 18919682, 18901356, 19140712
19807548, 19124972, 18678829, 18910748, 18849896, 19147509, 19076165
18953878, 19273758, 19498411, 18964974, 18999195, 18759724, 18835366
19459023, 19184276, 19013789, 19207286, 18950232, 19680763, 19259765
19066844, 19148791, 19234907, 19538714, 19449737, 19649640, 18962892
19062675, 19187515, 19513969, 19513888, 19230771, 18859710, 19504641
19453778, 19341481, 19343245, 18304090, 19314048, 19473851, 19068333
18834934, 18843572, 19241655, 19470791, 19458082, 18242738, 18894342
19185148, 18945435, 18372060, 19232454, 18953889, 18541110, 19319192
19023430, 19204743, 19140711, 19259290, 19178629, 19045388, 19304104
19241857, 19522571, 19140891, 19076778, 18875012, 19270660, 19457575
19066699, 18861564, 19021575, 19069755, 19273760, 18715884, 19225265
19584688, 18798573, 19018001, 19325701, 19292272, 18819158, 19270956
19068003, 18937186, 19049721, 19368917, 19222693, 18700893, 18406774
18868829, 19010177, 19141785, 19163887, 18852058, 18715868, 19538241, 19804032

Patch 19769473 : applied on Tue Mar 03 09:52:59 WST 2015
Unique Patch ID: 18256364
Patch description: "ACFS Patch Set Update : 12.1.0.2.2 (19769473)"
Created on 2 Dec 2014, 23:02:26 hrs PST8PDT
Bugs fixed:
19452723, 19078259, 19919907, 18900953, 19127216, 18934139, 19844362
19335268, 18951113, 18899600, 18851012, 19149476, 19517835, 19428756
19183802, 19013966, 19051391, 19690653, 19195735, 19355146, 19001684
19509898, 19053182, 19644505, 19593769, 19610001, 19475588, 19353057
18957085, 19279106, 19270227, 19201087, 19184398, 19649858, 19450090
19502657, 19859183, 19557156, 18877486, 19528981, 18510745, 18915417
19134464, 19060056, 18955907

Patch level status of Cluster nodes :

Patching Level Nodes
-------------- -----
2888253033     rac1,rac2

--------------------------------------------------------------------------------

OPatch succeeded.

Oracle 12c Partitioning New Features


Online Move Partition

In Oracle 12c we can now move as well as compress partitions online while DML transactions on the partitioned table are in progress.

In earlier versions we would get an error like the one shown below if we attempted to move a partition while a DML statement on the partitioned table was in progress.

ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired

This is tied in to the 12c new feature related to Information Lifecycle Management where tables (and partitions) can be moved to low cost storage and/or compressed as part of an ILM policy. So we would not like to impact any DML statements which are in progress when the partitions are being moved or compressed – hence the online feature.

Another feature in 12c is that an online partition move no longer leaves the associated partitioned indexes in an unusable state. The UPDATE INDEXES ONLINE clause will maintain both the global and local indexes on the table.

SQL> ALTER TABLE sales MOVE PARTITION sales_q2_1998 TABLESPACE users
2  UPDATE INDEXES ONLINE;

Table altered.
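Compression can be requested as part of the same online move – for example, a sketch using Advanced Row Compression (which requires the Advanced Compression option):

SQL> ALTER TABLE sales MOVE PARTITION sales_q2_1998 TABLESPACE users
  2  ROW STORE COMPRESS ADVANCED UPDATE INDEXES ONLINE;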

 

Interval Reference Partitioning

In Oracle 11g the Interval and Reference partitioning methods were introduced. In 12c we take this one step further and combine those two partitioning methods into one. So we can now have a child table that is reference partitioned based on a parent table which has interval partitioning defined for it.

So two things to keep in mind.

Whenever a partition is created in the child table, it inherits its name from the corresponding partition in the parent table.

Partitions in the child table corresponding to partitions in the parent table are created only when rows are inserted into the child table.

Let us look at an example using the classic ORDERS and ORDER_ITEMS tables, which have a parent-child relationship, where the parent ORDERS table has been interval partitioned.

CREATE TABLE "OE"."ORDERS_PART"
 (    
"ORDER_ID" NUMBER(12,0) NOT NULL,
"ORDER_DATE" TIMESTAMP (6)  CONSTRAINT "ORDER_PART_DATE_NN" NOT NULL ENABLE,
"ORDER_MODE" VARCHAR2(8),
"CUSTOMER_ID" NUMBER(6,0) ,
"ORDER_STATUS" NUMBER(2,0),
"ORDER_TOTAL" NUMBER(8,2),
"SALES_REP_ID" NUMBER(6,0),
"PROMOTION_ID" NUMBER(6,0),
CONSTRAINT ORDERS_PART_pk PRIMARY KEY (ORDER_ID)
)
PARTITION BY RANGE (ORDER_DATE)
INTERVAL (NUMTOYMINTERVAL(1,'YEAR'))
(PARTITION P_2006 VALUES LESS THAN (TIMESTAMP'2007-01-01 00:00:00 +00:00'),
PARTITION P_2007 VALUES LESS THAN (TIMESTAMP'2008-01-01 00:00:00 +00:00'),
PARTITION P_2008 VALUES LESS THAN (TIMESTAMP'2009-01-01 00:00:00 +00:00')
)
;

CREATE TABLE "OE"."ORDER_ITEMS_PART"
(    
"ORDER_ID" NUMBER(12,0) NOT NULL,
"LINE_ITEM_ID" NUMBER(3,0) NOT NULL ENABLE,
"PRODUCT_ID" NUMBER(6,0) NOT NULL ENABLE,
"UNIT_PRICE" NUMBER(8,2),
"QUANTITY" NUMBER(8,0),
CONSTRAINT "ORDER_ITEMS_PART_FK" FOREIGN KEY ("ORDER_ID")
REFERENCES "OE"."ORDERS_PART" ("ORDER_ID") ON DELETE CASCADE )
PARTITION BY REFERENCE (ORDER_ITEMS_PART_FK)
;

Note the partitions in the parent table

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDERS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008

We can see that the child table has inherited the same partitions from the parent table

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDER_ITEMS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008

We now insert a new row into the table which leads to the creation of a new partition automatically

SQL> INSERT INTO ORDERS_PART
  2   VALUES
  3   (9999,'17-MAR-15 01.00.00.000000 PM', 'DIRECT',147,5,1000,163,NULL);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDERS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008
SYS_P301

Note at this point the child table still has only 3 partitions and a new partition corresponding to the parent table will only be created when rows are inserted into the child table.

We now insert some rows into the child table – note that the row insertions lead to a new partition being created in the child table corresponding to the parent table.

SQL> INSERT INTO ORDER_ITEMS_PART
  2  VALUES
  3  (9999,1,2289,10,100);

1 row created.

SQL> INSERT INTO ORDER_ITEMS_PART
  2   VALUES
  3  (9999,2,2268,500,1);

1 row created.

SQL> COMMIT;

Commit complete.

SQL> SELECT PARTITION_NAME FROM USER_TAB_PARTITIONS WHERE TABLE_NAME='ORDER_ITEMS_PART';

PARTITION_NAME
--------------------------------------------------------------------------------------------------------------------------------
P_2006
P_2007
P_2008
SYS_P301

TRUNCATE CASCADE

In Oracle 12c we can add the CASCADE option to the TRUNCATE TABLE or ALTER TABLE TRUNCATE PARTITION commands.

The CASCADE option will truncate all child tables which reference the parent table, provided the referential constraint has been created with the ON DELETE CASCADE option.

The TRUNCATE CASCADE when used at the partition level in a reference partition model will also cascade to the partitions in the child table as shown in the example below.

SQL> alter table orders_part truncate partition SYS_P301 cascade;

Table truncated.


SQL> select count(*) from orders_part partition (SYS_P301);

  COUNT(*)
----------
         0

SQL>  select count(*) from order_items_part partition (SYS_P301);

  COUNT(*)
----------
         0

Multi-Partition Maintenance Operations

In Oracle 12c we can add, truncate or drop multiple partitions as part of a single operation.

In versions prior to 12c, the SPLIT and MERGE PARTITION operations could only be carried out on two partitions at a time. If we had a table with, say, 10 partitions that we needed to merge, we had to issue 9 separate DDL statements.

Now with a single command we can roll out data into smaller partitions or roll up data into a larger partition.

CREATE TABLE sales
( prod_id       NUMBER(6)
, cust_id       NUMBER
, time_id       DATE
, channel_id    CHAR(1)
, promo_id      NUMBER(6)
, quantity_sold NUMBER(3)
, amount_sold   NUMBER(10,2)
)
PARTITION BY RANGE (time_id)
( PARTITION sales_q1_2014 VALUES LESS THAN (TO_DATE('01-APR-2014','dd-MON-yyyy'))
, PARTITION sales_q2_2014 VALUES LESS THAN (TO_DATE('01-JUL-2014','dd-MON-yyyy'))
, PARTITION sales_q3_2014 VALUES LESS THAN (TO_DATE('01-OCT-2014','dd-MON-yyyy'))
, PARTITION sales_q4_2014 VALUES LESS THAN (TO_DATE('01-JAN-2015','dd-MON-yyyy'))
);


ALTER TABLE sales ADD
PARTITION sales_q1_2015 VALUES LESS THAN (TO_DATE('01-APR-2015','dd-MON-yyyy')),
PARTITION sales_q2_2015 VALUES LESS THAN (TO_DATE('01-JUL-2015','dd-MON-yyyy')),
PARTITION sales_q3_2015 VALUES LESS THAN (TO_DATE('01-OCT-2015','dd-MON-yyyy')),
PARTITION sales_q4_2015 VALUES LESS THAN (TO_DATE('01-JAN-2016','dd-MON-yyyy'));


SQL>  ALTER TABLE sales MERGE PARTITIONS sales_q1_2015,sales_q2_2015,sales_q3_2015,sales_q4_2015  INTO PARTITION sales_2015;

Table altered.

SQL>  ALTER TABLE sales SPLIT PARTITION sales_2015 INTO
  2  (PARTITION sales_q1_2015 VALUES LESS THAN (TO_DATE('01-APR-2015','dd-MON-yyyy')),
  3  PARTITION sales_q2_2015 VALUES LESS THAN (TO_DATE('01-JUL-2015','dd-MON-yyyy')),
  4  PARTITION sales_q3_2015 VALUES LESS THAN (TO_DATE('01-OCT-2015','dd-MON-yyyy')),
  5  PARTITION sales_q4_2015);

Table altered.

Partial Indexing

In Oracle 12c we can now have a case where only certain partitions of a table are indexed while the other partitions have no indexes at all. For example, we may want the recent partitions, which are subject to heavy OLTP-type activity, to have no indexes in order to speed up insert activity, while the older partitions of the table serve DSS-type queries and would benefit from indexing.

We can turn indexing on or off at the table level and then enable or disable it selectively at the partition level.

Have a look at the example below.

CREATE TABLE "SH"."SALES_12C"
(
"PROD_ID" NUMBER NOT NULL ENABLE,
"CUST_ID" NUMBER NOT NULL ENABLE,
"TIME_ID" DATE NOT NULL ENABLE,
"CHANNEL_ID" NUMBER NOT NULL ENABLE,
"PROMO_ID" NUMBER NOT NULL ENABLE,
"QUANTITY_SOLD" NUMBER(10,2) NOT NULL ENABLE,
"AMOUNT_SOLD" NUMBER(10,2) NOT NULL ENABLE
) 
TABLESPACE "EXAMPLE"
INDEXING OFF
PARTITION BY RANGE ("TIME_ID")
(PARTITION "SALES_1995"  VALUES LESS THAN (TO_DATE(' 1996-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1996"  VALUES LESS THAN (TO_DATE(' 1997-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1997"  VALUES LESS THAN (TO_DATE(' 1998-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1998"  VALUES LESS THAN (TO_DATE(' 1999-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_1999"  VALUES LESS THAN (TO_DATE(' 2000-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) ,
PARTITION "SALES_2000"  VALUES LESS THAN (TO_DATE(' 2001-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON,
PARTITION "SALES_2001"  VALUES LESS THAN (TO_DATE(' 2002-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON,
PARTITION "SALES_2002"  VALUES LESS THAN (TO_DATE(' 2003-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS')) INDEXING ON
 )
;
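
Indexing can also be switched on or off for a partition after the table has been created. A sketch (not run in the original session):

-- enable indexing for an older partition
ALTER TABLE sales_12c MODIFY PARTITION sales_1999 INDEXING ON;

-- and switch it back off again
ALTER TABLE sales_12c MODIFY PARTITION sales_1999 INDEXING OFF;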

Create a local partitioned index on the table and note the size of the local index.

SQL> CREATE INDEX SALES_12C_IND ON SALES_12C (TIME_ID) LOCAL;

Index created.


SQL> SELECT SUM(BYTES)/1048576 FROM USER_SEGMENTS WHERE SEGMENT_NAME='SALES_12C_IND';

SUM(BYTES)/1048576
------------------
                32

We drop the index and re-create it, but this time as a partial index. Since the index is now built only on the partitions with indexing enabled rather than on the entire table, it is half the size of the original index.
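
The drop statement itself was not captured in the transcript; it would simply be:

DROP INDEX sales_12c_ind;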

SQL> CREATE INDEX SALES_12C_IND ON SALES_12C (TIME_ID) LOCAL INDEXING PARTIAL;

Index created.

SQL> SELECT SUM(BYTES)/1048576 FROM USER_SEGMENTS WHERE SEGMENT_NAME='SALES_12C_IND';

SUM(BYTES)/1048576
------------------
                16

We can see that for the partitions where indexing is not enabled, the index has been created as UNUSABLE.

SQL> SELECT PARTITION_NAME,STATUS FROM USER_IND_PARTITIONS WHERE INDEX_NAME='SALES_12C_IND';

PARTITION_NAME                 STATUS
------------------------------ --------
SALES_2002                     USABLE
SALES_2001                     USABLE
SALES_2000                     USABLE
SALES_1999                     UNUSABLE
SALES_1998                     UNUSABLE
SALES_1997                     UNUSABLE
SALES_1996                     UNUSABLE
SALES_1995                     UNUSABLE
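
The space saving arises because no physical segments are allocated for the UNUSABLE index partitions. A query sketch to verify this (SEGMENT_CREATED should show NO for the partitions with indexing disabled):

select partition_name, status, segment_created
from user_ind_partitions
where index_name = 'SALES_12C_IND';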

Note the difference in the EXPLAIN PLAN output of the two queries below. They access different partitions of the same table: one is able to use the local partial index, while the other performs a full table scan.

SQL> EXPLAIN PLAN FOR
  2  SELECT SUM(quantity_sold) from sales_12c
  3  where time_id < '01-JAN-97';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2557626605

-------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name      | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |           |     1 |    11 |  1925   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE           |           |     1 |    11 |            |          |       |       |
|   2 |   PARTITION RANGE ITERATOR|           |   472 |  5192 |  1925   (1)| 00:00:01 |     1 |   KEY |
|*  3 |    TABLE ACCESS FULL      | SALES_12C |   472 |  5192 |  1925   (1)| 00:00:01 |     1 |   KEY |
-------------------------------------------------------------------------------------------------------

SQL>  EXPLAIN PLAN FOR
  2  SELECT SUM(quantity_sold) from sales_12c
  3  where time_id='01-JAN-97';

Explained.

SQL> select * from table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------
Plan hash value: 2794067059

--------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                      | Name          | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                               |               |     1 |    22 |     2   (0)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                                |               |     1 |    22 |            |          |       |       |
|   2 |   VIEW                                         | VW_TE_2       |     2 |    26 |     2   (0)| 00:00:01 |       |       |
|   3 |    UNION-ALL                                   |               |       |       |            |          |       |       |
|*  4 |     FILTER                                     |               |       |       |            |          |       |       |
|   5 |      PARTITION RANGE SINGLE                    |               |     1 |    22 |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|   6 |       TABLE ACCESS BY LOCAL INDEX ROWID BATCHED| SALES_12C     |     1 |    22 |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|*  7 |        INDEX RANGE SCAN                        | SALES_12C_IND |     1 |       |     1   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|*  8 |     FILTER                                     |               |       |       |            |          |       |       |
|   9 |      PARTITION RANGE SINGLE                    |               |     1 |    22 |     2   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
|* 10 |       TABLE ACCESS FULL                        | SALES_12C     |     1 |    22 |     2   (0)| 00:00:01 |KEY(AP)|KEY(AP)|
--------------------------------------------------------------------------------------------------------------------------------

Note the new INDEXING and DEF_INDEXING columns in the data dictionary views.

SQL> select def_indexing from user_part_tables where table_name='SALES_12C';

DEF
---
OFF


SQL> select indexing from user_indexes where index_name='SALES_12C_IND';

INDEXIN
-------
PARTIAL
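
We can also see which partitions of the table itself have indexing enabled via the INDEXING column of USER_TAB_PARTITIONS. A query sketch (the output will reflect the ON/OFF choices made when the table was created):

select partition_name, indexing
from user_tab_partitions
where table_name = 'SALES_12C';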

Asynchronous Global Index Maintenance

In earlier versions, operations like TRUNCATE or DROP PARTITION on even a single partition would render the global indexes unusable, and the indexes had to be rebuilt before the application could use them.

Now when we issue the same DROP or TRUNCATE PARTITION commands with the UPDATE INDEXES clause, the global indexes are maintained and left in a USABLE state.

The actual global index maintenance is deferred and is performed by a DBMS_SCHEDULER job called SYS.PMO_DEFERRED_GIDX_MAINT_JOB, which is scheduled to run daily at 2:00 AM.
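
To confirm the job and its schedule we can query DBA_SCHEDULER_JOBS (a sketch; the exact repeat interval may vary between versions and installations):

select job_name, repeat_interval, enabled
from dba_scheduler_jobs
where job_name = 'PMO_DEFERRED_GIDX_MAINT_JOB';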

We can also trigger the cleanup manually using the CLEANUP_GIDX procedure of the DBMS_PART package.

A new ORPHANED_ENTRIES column in the DBA/ALL/USER_INDEXES and corresponding *_IND_PARTITIONS views tracks whether a global index or index partition still contains stale entries left behind by a DROP or TRUNCATE PARTITION operation.

Let us look at an example. Note the important point that the global index is left in a USABLE state even after we truncate a partition of the table.
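
The example below assumes a partitioned global index named SALES_GIDX already exists on the table. A hypothetical definition (the indexed column is an assumption; the eight hash partitions are chosen to match the SYS_P* partition names in the output that follows):

-- hypothetical global index; column choice is an assumption
CREATE INDEX sales_gidx ON sales_12c (cust_id)
GLOBAL PARTITION BY HASH (cust_id) PARTITIONS 8;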

SQL>  alter table sales_12c truncate partition SALES_2000 UPDATE INDEXES;

Table truncated.

SQL> select distinct status from user_ind_partitions;

STATUS
--------
USABLE


SQL> select partition_name, ORPHANED_ENTRIES from user_ind_partitions
  2  where index_name='SALES_GIDX';

PARTITION_NAME                 ORP
------------------------------ ---
SYS_P348                       YES
SYS_P347                       YES
SYS_P346                       YES
SYS_P345                       YES
SYS_P344                       YES
SYS_P343                       YES
SYS_P342                       YES
SYS_P341                       YES



SQL> exec dbms_part.cleanup_gidx('SH','SALES_12C');

PL/SQL procedure successfully completed.

SQL> select partition_name, ORPHANED_ENTRIES from user_ind_partitions
  2  where index_name='SALES_GIDX';

PARTITION_NAME                 ORP
------------------------------ ---
SYS_P341                       NO
SYS_P342                       NO
SYS_P343                       NO
SYS_P344                       NO
SYS_P345                       NO
SYS_P346                       NO
SYS_P347                       NO
SYS_P348                       NO
