Channel: Oracle DBA – Tips and Techniques

Installing the 12c Cloud Control plug-in for Microsoft SQL Server Database


 In an earlier post I had described how to use the Self Update feature of 12c Cloud Control to deploy the 12c management agent on a Windows 64 bit server.

Let us now see how we can deploy the 12c plug-in on the same Windows server to enable us to monitor a Microsoft SQL Server 2008 environment running on it.

 A plug-in is an additional component which can be plugged into an existing 12c Cloud Control installation in order to extend the default out-of-the-box management and monitoring capabilities.

Plug-ins, as well as updates to them, are released from time to time, and they enable us to monitor both Oracle and non-Oracle databases and applications.

 Deploying a plug-in essentially consists of the following:

a) Download the plug-in archive and store it in the Software Library configured on the OMS host.

b) Deploy the plug-in to the OMS host which manages and monitors the target pertaining to the particular plug-in type. In this example the plug-in is Microsoft SQL Server.

c) Update the OMS repository with metadata information about the plug-in.

d) Deploy the plug-in on the Management Agent.

 Note that in this example we will be using the Offline Patching mode to update the Software Library.

From the Setup menu, choose Extensibility, then select Plug-ins.

 

From the Actions drop-down menu, select Check Updates 

 

This will take us to the Self Update page. Note that our Connection Mode is Offline.

Click on Plug-In

The Plug-in Updates page will show us all the updates available for the various plug-ins. We can see that there is a newer version of the plug-in which is available for Microsoft SQL Server Database.

Select that line and click on Download

Since we are using the Offline Patching mode, we need to download the file using the URL shown and then copy that file to the OMS host.

We will then use EMCLI to run the command to import the plug-in metadata into the OMS repository.

From the OMS $ORACLE_HOME/bin run the following emcli commands:

[oracle@kens-oem-prod bin]$ ./emcli login -username="sysman" -password="xxx"
Login successful
[oracle@kens-oem-prod bin]$ ./emcli import_update -omslocal -file="/u01/stage/p14047236_112000_Generic.zip"

Processing update: Plug-in – Microsoft SQL Server Plugin for monitoring SQL Server database from Enterprise Manager
Operation completed successfully. Update has been uploaded to Enterprise Manager. Please use the Self Update Home to manage this update.

 

We will now see that the plug-in status has changed from Available to Downloaded for Microsoft SQL Server Database.

 

Click on Plug-in

 

Now click on Deploy On and select Management Servers…

 

The Microsoft SQL Server Database updated plug-in will now be deployed on the Management Server.

We can monitor the progress of the plug-in deployment on the Management Server – click on the Show Status button.

Once the deployment job is completed, we can see that the column “On Management Server” has been populated with the Microsoft SQL Server Database plug-in details.

We now have to deploy the plug-in on the Management Agent.

Click on the Deploy On drop-down menu and select Management Agent…

Select the Management Agent where this plug-in needs to be deployed.

Click on Add and select the Management Agent and its associated operating system.

Click on Continue

 

Click on Next

Click on Deploy button


Click on Setup, Add Targets, Add Targets Manually

Select the option Add Non-Host Targets by Specifying Target Monitoring Properties

In the Target Type field, select Microsoft SQL Server.

In the Monitoring Agent field, select the agent deployed on the Windows server hosting the SQL Server instance to be monitored.


Here we now need to enter the SQL Server credentials for the sa database user account as well as the TCP/IP port number for SQL Server.

Click on the Test Connection button

From the Targets menu, Select All Targets. We can now see the SQL Server target we configured earlier.


Performing a GoldenGate Initial Data Load using SQL Loader and BULKLOAD


Some time ago, I had posted a note on performing the initial data load for a GoldenGate capture and replication environment.

http://gavinsoorma.com/2010/02/oracle-goldengate-tutorial-4-performing-initial-data-load/

There are several ways of performing the initial data load with GoldenGate, as well as from outside GoldenGate using, say, the Oracle export/import utility or the Data Pump facility.

In this case we shall look at using the Oracle database utility SQL Loader as well as the BULKLOAD extract parameter to perform the GG initial data load.

Note that in a production environment, where we would not have the luxury of extended downtime for the initial load, we would configure additional change-synchronization Extract and Replicat processes and start them before performing the initial data load. That way, any changes occurring while the initial load runs are also captured, and we can apply them to the target database via the Replicat process once the initial load has completed.
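The change-synchronization processes mentioned above would be created and started before the initial load begins; a minimal GGSCI sketch, where the group and trail names (extchg, repchg, ./dirdat/ch) are hypothetical:

```
-- on the source --
GGSCI> ADD EXTRACT extchg, TRANLOG, BEGIN NOW
GGSCI> ADD RMTTRAIL ./dirdat/ch, EXTRACT extchg
GGSCI> START EXTRACT extchg

-- on the target --
GGSCI> ADD REPLICAT repchg, EXTTRAIL ./dirdat/ch
-- start repchg only after the initial load into the target has completed
GGSCI> START REPLICAT repchg
```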

In this method the initial load Extract process will read the records from the source table directly (not from the redo logs or archived redo logs) and write them to the local or remote trail files in ASCII format.

These external ASCII files are then read by the Oracle SQL Loader utility to load records into the target database. Besides SQL Loader, these ASCII files can also be read by the SQL Server BCP, DTS and SSIS utilities.

On the target, the initial load Replicat process will create (and can also run) the control files used by the SQL Loader utility.
 

Loading data with a database utility

 Let us now look at a test case. We have a source table called LOAD_DATA with 50614 rows. The same table exists in the target database but at the moment does not have any records. We want to now load the 50614 rows from source to target table.
 

Source database

 SQL> select count(*) from load_data;

   COUNT(*)

----------

   50614

 Target database
 

SQL> select count(*) from load_data;

   COUNT(*)

----------

0

 Initial Load Extract parameter file
 

extract load3
SETENV (NLS_LANG = "AMERICAN_AMERICA.AL32UTF8")
userid ggs_owner, password ggs_owner
FORMATASCII, SQLLOADER
rmthost demora061rh, MGRPORT 7809
rmtfile ./dirdat/load_data.dat PURGE
TABLE SH.LOAD_DATA;

 

We now run the Extract process directly from the command line as follows:
 

[oracle@pdemora062rhv goldengate]$ ./extract paramfile /u01/app/goldengate/dirprm/load3.prm reportfile load3.rpt
 

If we view the report for the load3 extract process we can see that records have been written in ASCII format to the external file load_data.dat

 

2012-06-16 06:54:27  INFO    OGG-01478  Output file ./dirdat/load_data.dat is using format ASCII.

 2012-06-16 06:54:33  INFO    OGG-01226  Socket buffer size set to 27985 (flush size 27985).

 Processing table SH.LOAD_DATA

 ***********************************************************************

*                   ** Run Time Statistics **                         *

***********************************************************************

 Report at 2012-06-16 06:54:35 (activity since 2012-06-16 06:54:24)

 Output to ./dirdat/load_data.dat:

 From Table SH.LOAD_DATA:

       #                   inserts:     50614

       #                   updates:         0

       #                   deletes:         0

       #                  discards:         0

 

Target Initial Load Replicat parameter file
 

GENLOADFILES sqlldr.tpl
userid ggs_owner, password ggs_owner
extfile ./dirdat/load_data.dat
assumetargetdefs
map sh.load_data,target sh.load_data;

 

The GENLOADFILES parameter specifies the name of the template file used to generate the control and run files, which in this case will be used by SQL Loader.

The template file for SQL Loader is sqlldr.tpl and this file can be found in the root folder of the GoldenGate software installation.

More information about the GENLOADFILES parameter can be found in the Oracle GoldenGate Windows and UNIX Reference Guide (Pages 223-226).

We now run the Replicat process directly from the command line as follows:
 

[oracle@pdemora061rhv goldengate]$ ./replicat paramfile /u01/app/goldengate/dirprm/load4.prm reportfile load4.rpt

If we view the report for the initial load Replicat process load4, we can see that the SQL Loader control file has been created.
 

….

File created for loader initiation: LOAD_DATA.run

File created for loader control:    LOAD_DATA.ctl

 Load files generated successfully.

If we look at the contents of the LOAD_DATA.run file, we find that it has all the required commands we need to load data using the SQL Loader utility.
 

[oracle@pdemora061rhv goldengate]$ cat LOAD_DATA.run

sqlldr userid=ggs_owner/ggs_owner control=LOAD_DATA log=LOAD_DATA direct=true

 

Let us check the contents of the SQL Loader control file which has been created.
 

[oracle@pdemora061rhv goldengate]$ cat LOAD_DATA.ctl

unrecoverable
load data
infile load_data.dat
truncate
into table LOAD_DATA
(
  OWNER                          position(4:33)
                                 defaultif (3)='Y'
, OBJECT_NAME                    position(35:64)
                                 defaultif (34)='Y'
)

 

I have edited the control file and inserted the full path of the location of the SQL Loader .dat file. I have also qualified the table name with the schema name.

 

[oracle@pdemora061rhv goldengate]$ cat LOAD_DATA.ctl

unrecoverable
load data
infile '/u01/app/goldengate/dirdat/load_data.dat'
truncate
into table SH.LOAD_DATA
(
  OWNER                          position(4:33)
                                 defaultif (3)='Y'
, OBJECT_NAME                    position(35:64)
                                 defaultif (34)='Y'
)
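The position(m:n) clauses above select fixed character ranges from each record in the .dat file, with the preceding character acting as a NULL indicator. The same ranges can be sanity-checked outside SQL Loader; the record layout below is a simplified, made-up illustration of that idea, not the actual GoldenGate record format:

```shell
# Build a hypothetical fixed-width record: chars 1-3 are header/indicator
# bytes, chars 4-33 hold OWNER, char 34 is an indicator, chars 35-64 hold
# OBJECT_NAME (padding done by printf field widths).
rec=$(printf '%-3s%-30s%-1s%-30s' 'I N' 'SH' 'N' 'EMP')

# Extract the same character ranges that the position() clauses read
owner=$(printf '%s\n' "$rec" | cut -c4-33)
object_name=$(printf '%s\n' "$rec" | cut -c35-64)

# Word splitting on the unquoted variables trims the trailing padding
echo "OWNER=$(echo $owner) OBJECT_NAME=$(echo $object_name)"
```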

 
We now execute the LOAD_DATA.run file.
 

[oracle@pdemora061rhv goldengate]$ ./LOAD_DATA.run

 SQL*Loader: Release 11.2.0.1.0 - Production on Sat Jun 16 07:05:01 2012

 Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

 Load completed - logical record count 50614.

 

Let us confirm this…
 

SQL> select count(*) from load_data;

   COUNT(*)

----------

50614

 

 Loading data using the BULKLOAD parameter

 
 
BULKLOAD directs the Replicat initial load process to communicate directly with the Oracle SQL*Loader interface and load data as a direct path bulk load operation.

The limitation of this method is that BULKLOAD is specific to the Oracle SQL Loader utility and cannot be used with other databases. Also, if the table has columns with LOB or LONG data, BULKLOAD cannot be used.

Let us test this out using the same source and target tables which we used in the previous example.
 

Target database
 

SQL> truncate table load_data;

Table truncated.

SQL> select count(*) from load_data;

  COUNT(*)
----------
         0

On the Source GoldenGate environment these are the contents of the extract parameter file:
 

[oracle@pdemora062rhv dirprm]$  cat load1.prm
EXTRACT load1
USERID ggs_owner, PASSWORD ggs_owner
RMTHOST pdemora061rhv, MGRPORT 7809
RMTTASK replicat, GROUP load2
TABLE sh.load_data;

On the target GoldenGate environment these are the contents of the replicat parameter file:
 

[oracle@pdemora061rhv dirprm]$ cat load2.prm
REPLICAT load2
USERID ggs_owner, PASSWORD ggs_owner
BULKLOAD
ASSUMETARGETDEFS
MAP sh.load_data, TARGET sh.load_data;

 

On the source we now start the initial load extract process.
 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 1> start extract load1

Sending START request to MANAGER ...
EXTRACT LOAD1 starting

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 2> info extract load1

EXTRACT    LOAD1     Last Started 2012-06-15 06:21   Status STOPPED
Checkpoint Lag       Not Available
Log Read Checkpoint  Table SH.LOAD_DATA
                     2012-06-15 06:22:13  Record 50614
Task                 SOURCEISTABLE

 

On the target database we see that the rows from the source table have been inserted into the LOAD_DATA table.
 

SQL> select count(*) from load_data;

  COUNT(*)
----------
     50614

Using the FORMAT RELEASE parameter to handle GoldenGate Version Differences

Recently I encountered this error while doing a GoldenGate replication from an Oracle 11g R2 source database to an Oracle 10g R2 target database.

ERROR OGG-01389 File header failed to parse tokens. File /u01/app/goldengate/dirdat/gv000000, last offset 915, data: 0x 
Reading the GoldenGate documentation I came across this section which read:

“A trail or extract file must have a version that is equal to, or lower than, that of the process that reads it. Otherwise the process will abend.” 
The source GoldenGate environment in my case was 11.2.1.0.0 and the target GoldenGate environment was 11.1.1.1

So it looks like the extract file generated by GoldenGate version 11.2.1.0 was not compatible with the 11.1.1 Replicat process reading the trail file.
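That version rule (the trail's format version must be equal to or lower than the reader's) can be illustrated with a small shell check; the can_read helper and the use of GNU sort -V are purely illustrative, not part of GoldenGate:

```shell
# Returns success when a reader of version $2 can process a trail
# written in format version $1 (trail version must be <= reader version).
can_read() {
  trail_ver=$1
  reader_ver=$2
  # With sort -V (version sort), the lower version sorts first, so the
  # trail version must appear first (or the two must be equal).
  [ "$(printf '%s\n%s\n' "$trail_ver" "$reader_ver" | sort -V | head -n1)" = "$trail_ver" ]
}

can_read 11.1 11.1.1.1     && echo "11.1-format trail: readable by an 11.1.1.1 process"
can_read 11.2.1.0 11.1.1.1 || echo "11.2.1.0-format trail: NOT readable by an 11.1.1.1 process"
```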

To overcome this problem we had to use the parameter FORMAT RELEASE in the extract parameter file.

Quoting the documentation:

FORMAT RELEASE <major>.<minor>
Specifies the metadata format of the data that is sent by Extract to a trail, a file, or (if a remote task) to another process. 
The metadata tells the reader process whether the data records are of a version that it supports. The metadata format depends on the version of the Oracle GoldenGate process.
So we want to change the format of the trail file to the (lower) GoldenGate version of the target environment. 

In the extract parameter file we have to add in this case the line:

rmttrail /u01/app/goldengate/dirdat/yy, format release 11.1

Test Case:

Source GoldenGate environment

./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.0 OGGCORE_11.2.1.0.0_PLATFORMS_120131.1910_FBO

Target GoldenGate environment

./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.1.1.1.2 OGGCORE_11.1.1.1.2_PLATFORMS_111004.2100

Add the Extract 

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 4> add extract testext tranlog begin now
EXTRACT added.

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 5> add rmttrail /u01/app/goldengate/dirdat/gv extract testext
RMTTRAIL added.

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 8> view params testext

extract testext
userid ggs_owner, password ggs_owner
rmthost 10.32.206.62, mgrport 7809
rmttrail /u01/app/goldengate/dirdat/gv
table sh.gavin;

Add the Replicat
GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 4> add replicat testrep exttrail /u01/app/goldengate/dirdat/gv
REPLICAT added.

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 6> edit params testrep

REPLICAT testrep
ASSUMETARGETDEFS
USERID ggs_owner,PASSWORD ggs_owner
MAP SH.GAVIN, TARGET SH.GAVIN;

Now start the Extract and Replicat processes and note the status …. We see that the Replicat process has abended.
GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 12> info extract testext

EXTRACT    TESTEXT   Last Started 2012-06-21 04:27   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:03 ago)
Log Read Checkpoint  Oracle Redo Logs
                     2012-06-21 04:29:48  Seqno 219, RBA 27456512
                     SCN 0.5335461 (5335461)

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 8> info replicat testrep

REPLICAT   TESTREP   Last Started 2012-06-21 04:28   Status ABENDED
Checkpoint Lag       00:00:00 (updated 00:03:27 ago)
Log Read Checkpoint  File /u01/app/goldengate/dirdat/gv000000
                     First Record  RBA 0

This is the error we see in the Replicat report :

2012-06-21 04:28:29 ERROR OGG-01389 File header failed to parse tokens. File /u01/app/goldengate/dirdat/gv000000, last offset 915, data: 0x
Now stop the Extract process and add the FORMAT RELEASE parameter
GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 13> stop extract testext

Sending STOP request to EXTRACT TESTEXT ...
Request processed.

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 14> edit params testext

extract testext
userid ggs_owner, password ggs_owner
rmthost 10.32.206.62, mgrport 7809
rmttrail /u01/app/goldengate/dirdat/gv, format release 11.1
table sh.gavin;

We want the extract to now start writing to a new trail file in the older (11.1) format  compatible with the target GG version. So we use the ETROLLOVER command for this purpose.
GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 15> alter extract testext, etrollover

2012-06-21 04:32:40  INFO    OGG-01520  Rollover performed.  For each affected output trail of Version 10 or higher format, after starting the source extract,
issue ALTER EXTSEQNO for that trail's reader (either pump EXTRACT or REPLICAT) to move the reader's scan to the new trail file;  it will not happen automatically.
EXTRACT altered.

On the Target side, we want the Replicat process to read the new trail file when it starts up and not the one with the higher 11.2 version it was reading initially when it abended
GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 7> alter replicat testrep  extseqno 1
REPLICAT altered.

We now start the Replicat process and see that it is running fine and reading the next sequence in the trail file which is gv000001
GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 8> start replicat testrep

Sending START request to MANAGER ...
REPLICAT TESTREP starting

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 9> info replicat testrep

REPLICAT   TESTREP   Last Started 2012-06-21 04:41   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:01 ago)
Log Read Checkpoint  File /u01/app/goldengate/dirdat/gv000001
                     First Record  RBA 1049

Performing a GoldenGate Upgrade to version 11.2


In one of my earlier posts I had described how to handle GoldenGate version differences between the source and target environments. In that case my source was version 11.2 and the target was version 11.1.  We had to use the FORMAT RELEASE parameter to handle such a version difference.

 

http://gavinsoorma.com/2012/06/using-the-format-release-parameter-to-handle-goldengate-version-differences/

 

Let us now look at an example of how to upgrade the existing target 11.1 environment to GoldenGate version 11.2.1.0.

Note – in my case, the source was already running on version 11.2 and we only had to upgrade the target from 11.1 to 11.2

But the same process should apply if we are upgrading both the source as well as target GoldenGate environments.

 

Take a backup of existing GoldenGate 11.1 software directory

[oracle@pdemora062rhv app]$ cp -fR goldengate goldengate_11.1

 Create a new directory for GG 11.2 software

 [oracle@pdemora062rhv app]$ mkdir goldengate_11.2

 

 Check the extract status

 

Ensure that all our extract processes on the source have been completed.

In GGSCI on the source system, issue the SEND EXTRACT command with the LOGEND option until it shows there is no more redo data to capture.
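In a scripted upgrade, this check can be driven non-interactively by piping the command into ggsci; a sketch, assuming the extract group is named testext and ggsci is run from the GoldenGate home:

```
# Poll until the extract reports there is no more redo data to capture
while :; do
  out=$(echo "send extract testext logend" | ./ggsci)
  case "$out" in *YES*) break ;; esac
  sleep 10
done
```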

In this case we see that some transactions are still being processed:

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 9> send extract testext logend

 Sending LOGEND request to EXTRACT TESTEXT …

We run the same command again and we now see that the extract process has finished processing records.

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 11> send extract testext logend

 Sending LOGEND request to EXTRACT TESTEXT …

YES.

 

A good practice is to make a note of the redo log file currently being read from. You may need archive logs from this point if you receive any error on extract startup after upgrade.
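The redo log sequence currently being written (and read by Extract) can also be captured from the database itself with a standard query against v$log; a sketch:

```sql
-- Note the current online redo log sequence before stopping Extract
SELECT group#, sequence#, status
FROM   v$log
WHERE  status = 'CURRENT';
```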

 

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 10> send extract testext showtrans

 Sending SHOWTRANS request to EXTRACT TESTEXT …

No transactions found

 Oldest redo log file necessary to restart Extract is:

 Redo Log Sequence Number 235, RBA 10052624.

 

On the target site, check the Replicat status.

Ensure that all the Replicat groups have completed processing. In the case below, we see that one of the replicat processes is still active and not yet complete.

 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 4> send replicat testrep status

 Sending STATUS request to REPLICAT TESTREP …

  Current status: Processing data

  Sequence #: 1

  RBA: 4722168

  39491 records in current transaction

 

Now we see that the process has completed …

 

 GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 5>  send replicat testrep status

 Sending STATUS request to REPLICAT TESTREP …

  Current status: At EOF

  Sequence #: 1

  RBA: 6927789

  0 records in current transaction

 

Stop the extract and manager processes on source

 

 GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 12> stop extract testext

 Sending STOP request to EXTRACT TESTEXT …

Request processed.

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 13> stop manager

Manager process is required by other GGS processes.

Are you sure you want to stop it (y/n)? y

 Sending STOP request to MANAGER …

Request processed.

Manager stopped.

 

 Stop the replicat and manager processes on target

 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 6> stop replicat testrep

 Sending STOP request to REPLICAT TESTREP …

Request processed.

 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 7> stop manager

Manager process is required by other GGS processes.

Are you sure you want to stop it (y/n)? y

 Sending STOP request to MANAGER …

Request processed.

Manager stopped.

 

Note – if we are upgrading the source and have configured DDL support, we need to disable the DDL trigger by running the ddl_disable script from the Oracle GoldenGate directory on the source system.

 

SQL> conn sys as sysdba

Enter password:

Connected.

SQL> @ddl_disable

 Trigger altered.

 

Now unzip the 11.2 GoldenGate software.

 

[oracle@pdemora062rhv goldengate]$ cd ../goldengate_11.2

[oracle@pdemora062rhv goldengate_11.2]$ ls

V32400-01.zip

[oracle@pdemora062rhv goldengate_11.2]$ unzip *.zip

Archive:  V32400-01.zip

  inflating: fbo_ggs_Linux_x64_ora10g_64bit.tar

   inflating: OGG_WinUnix_Rel_Notes_11.2.1.0.1.pdf

  inflating: Oracle GoldenGate 11.2.1.0.1 README.txt

  inflating: Oracle GoldenGate 11.2.1.0.1 README.doc

 

[oracle@pdemora062rhv goldengate_11.2]$ tar -xvf fbo_ggs_Linux_x64_ora10g_64bit.tar

 

Now copy the contents of the unzipped 11.2 directory to the existing 11.1 GoldenGate software location.

Note – we are not touching our other existing 11.1 GG sub-directories like dirprm and dirdat. They still have our 11.1 version files.

 

[oracle@pdemora062rhv goldengate_11.2]$ cp -fR * /u01/app/goldengate

 

Let us now test the GoldenGate software version. Note that the warning below pertains to the ENABLEMONITORAGENT parameter, which we had enabled for GoldenGate Monitor and which is now deprecated in version 11.2. We can sort that out later.

So we can see that we are now using the version 11.2.1.0 GoldenGate software binaries.

 

[oracle@pdemora062rhv goldengate]$ ./ggsci

 2012-06-23 06:41:17  WARNING OGG-00254  ENABLEMONITORAGENT is a deprecated parameter.

 Oracle GoldenGate Command Interpreter for Oracle

Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230_FBO

Linux, x64, 64bit (optimized), Oracle 10g on Apr 23 2012 07:30:46

 

If we are upgrading the target GoldenGate environment, as in my case, we also have to upgrade the checkpoint table, as the 11.2 table structure is slightly different from the 11.1 structure.

 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 1> dblogin userid ggs_owner, password ggs_owner

Successfully logged into database.

 GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 2> upgrade checkpointtable ggs_owner.chkptab

 Successfully upgraded checkpoint table ggs_owner.chkptab.

 

Note:

This portion applies only if we are upgrading the source GoldenGate environment and would like to upgrade the DDL support to the 11.2 version as well.

 

SQL> conn sys as sysdba
Enter password:
Connected.

SQL> @ddl_disable

Trigger altered.
SQL> @ddl_remove

DDL replication removal script.
WARNING: this script removes all DDL replication objects and data.

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.

Enter Oracle GoldenGate schema name:GGS_OWNER
Working, please wait ...
Spooling to file ddl_remove_spool.txt

Script complete.

SQL> @marker_remove

Marker removal script.
WARNING: this script removes all marker objects and data.

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.

Enter Oracle GoldenGate schema name:GGS_OWNER

PL/SQL procedure successfully completed.
Sequence dropped.
Table dropped.
Script complete.
SQL> @marker_setup

Marker setup script

You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:GGS_OWNER
Marker setup table script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGS_OWNER

MARKER TABLE
-------------------------------
OK

MARKER SEQUENCE
-------------------------------
OK

Script complete.
SQL>
SQL> @ddl_setup

Oracle GoldenGate DDL Replication setup script

Verifying that current user has privileges to install DDL Replication...

You will be prompted for the name of a schema for the Oracle GoldenGate database                                                                                                                                                              objects.
NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Ora                                                                                                                                                             cle 11g and later, it can be enabled.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter Oracle GoldenGate schema name:GGS_OWNER

Working, please wait ...
Spooling to file ddl_setup_spool.txt

Checking for sessions that are holding locks on Oracle Golden Gate metadata tabl                                                                                                                                                             es ...

Check complete.

Using GGS_OWNER as a Oracle GoldenGate schema name.

Working, please wait ...

RECYCLEBIN must be empty.
This installation will purge RECYCLEBIN for all users.
To proceed, enter yes. To stop installation, enter no.

Enter yes or no:yes
DDL replication setup script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to GGS_OWNER

CLEAR_TRACE STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

CREATE_TRACE STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

TRACE_PUT_LINE STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

INITIAL_SETUP STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

DDLVERSIONSPECIFIC PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

DDLREPLICATION PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

DDLREPLICATION PACKAGE BODY STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

DDL IGNORE TABLE
-----------------------------------
OK

DDL IGNORE LOG TABLE
-----------------------------------
OK

DDLAUX  PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

DDLAUX PACKAGE BODY STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

SYS.DDLCTXINFO  PACKAGE STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

SYS.DDLCTXINFO  PACKAGE BODY STATUS:

Line/pos                                 Error
---------------------------------------- ---------------------------------------                                                                                                                                                             --------------------------
No errors                                No errors

DDL HISTORY TABLE
-----------------------------------
OK

DDL HISTORY TABLE(1)
-----------------------------------
OK

DDL DUMP TABLES
-----------------------------------
OK

DDL DUMP COLUMNS
-----------------------------------
OK

DDL DUMP LOG GROUPS
-----------------------------------
OK

DDL DUMP PARTITIONS
-----------------------------------
OK

DDL DUMP PRIMARY KEYS
-----------------------------------
OK

DDL SEQUENCE
-----------------------------------
OK

GGS_TEMP_COLS
-----------------------------------
OK

GGS_TEMP_UK
-----------------------------------
OK

DDL TRIGGER CODE STATUS:

Line/pos                                 Error
---------------------------------------- -----------------------------------------------------------------
No errors                                No errors

DDL TRIGGER INSTALL STATUS
-----------------------------------
OK

DDL TRIGGER RUNNING STATUS
--------------------------------------------------------------------------------
ENABLED

STAYMETADATA IN TRIGGER
--------------------------------------------------------------------------------
OFF

DDL TRIGGER SQL TRACING
--------------------------------------------------------------------------------
0

DDL TRIGGER TRACE LEVEL
--------------------------------------------------------------------------------
0

LOCATION OF DDL TRACE FILE
--------------------------------------------------------------------------------
/u01/app/oracle/oracle/product/10.2.0/db_1/admin/db10g/udump/ggs_ddl_trace.log

Analyzing installation status...
STATUS OF DDL REPLICATION
--------------------------------------------------------------------------------
SUCCESSFUL installation of DDL Replication software components

Script complete.

SQL> @role_setup

GGS Role setup script

This script will drop and recreate the role GGS_GGSUSER_ROLE
To use a different role name, quit this script and then edit the params.sql script to change the gg_role parameter to the preferred name. (Do not run the script.)

You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.

Enter GoldenGate schema name:GGS_OWNER
Wrote file role_setup_set.txt

PL/SQL procedure successfully completed.
Role setup script complete

Grant this role to each user assigned to the Extract, GGSCI, and Manager processes, by using the following SQL command:

GRANT GGS_GGSUSER_ROLE TO <loggedUser>

where <loggedUser> is the user assigned to the GoldenGate processes.
SQL> GRANT GGS_GGSUSER_ROLE TO ggs_owner;

Grant succeeded.

SQL> @ddl_enable

Trigger altered.

 

We now start the Manager on both source as well as target.

In my example, the earlier configuration had GoldenGate 11.2 on the source and 11.1 on the target, so I had to use the FORMAT RELEASE 11.1 parameter in my extract parameter file to handle the version difference.

Now that both the source and the target are on GoldenGate 11.2, I can remove the FORMAT RELEASE clause from my extract parameter file.
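As a sketch of this change (the trail path and table specification are assumed for illustration, not taken from the original configuration), the relevant lines of the extract parameter file before and after the upgrade might look like this:

```
EXTRACT testext
USERID ggs_owner, PASSWORD ggs_owner
-- While the target was still on GoldenGate 11.1:
-- RMTTRAIL /u01/gg/dirdat/aa, FORMAT RELEASE 11.1
-- With both ends on GoldenGate 11.2, the FORMAT RELEASE clause is dropped:
RMTTRAIL /u01/gg/dirdat/aa
TABLE ggs_owner.*;
```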

I also want the extract to start writing to a new trail file in the 11.2 format, since the Replicat is now on the same 11.2 version. I use ETROLLOVER to force a new trail file sequence and then start the extract process.

 

GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 21> alter extract testext etrollover

 

We also have to instruct the Replicat process, via the ALTER REPLICAT command, to start reading from the new 11.2 trail file sequence. After that we can start the Replicat process.

 

 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 8> alter replicat testrep extseqno 3

REPLICAT altered.

 

 GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 9>  start replicat testrep

 Sending START request to MANAGER …

REPLICAT TESTREP starting

 

 

Upgrading 11gR2 RAC Grid Infrastructure to 11.2.0.3


 

Here are some notes I prepared while upgrading an 11gR2 two-node RAC Grid Infrastructure from 11.2.0.2 to 11.2.0.3 in a Linux 64-bit test environment.

 

The 11.2.0.3 patch set is an out-of-place upgrade, so we need to install the 11.2.0.3 software in a separate location from the existing 11.2.0.x software.

 

The 11.2.0.3 software is not directly available from OTN. We need to download 11g Release 2 (11.2.0.3) Patch Set 2 (Patch 10404530) from the My Oracle Support (MOS) site.

Patch 10404530 comes as seven separate zip files. We do not need all of them and can download just a subset.

For Grid Infrastructure 11.2.0.3, we need to use p10404530_112030_platform_3of7.zip

For Database we need to download p10404530_112030_platform_1of7.zip & p10404530_112030_platform_2of7.zip

 

We need the latest version of OPatch: the OPatch utility must be version 11.2.0.1.5 or later to apply this patch. Download the latest version of patch 6880880 appropriate to your platform.

I installed OPatch version 11.2.0.3 on BOTH nodes of the cluster.

We need to install the prerequisite patch 12539000, following its README.txt file carefully.

 

Before applying patch 12539000, we need to take a few points into consideration.

 

We have to run the emocmrsp script to create the OCM (Oracle Configuration Manager) response file. The script is located under the Grid Oracle Home:

<GRID ORACLE_HOME>/OPatch/ocm/bin/emocmrsp

When we run emocmrsp, we do not need to enter an email address; we can simply answer Y when asked whether we wish to remain uninformed of security-related patch updates.

When applying the patch, we have to specify the location of the OCM (Oracle Configuration Manager) response file. We must specify the FULL PATH of the ocm.rsp file or else OPatch will fail.

For example – /u01/app/11.2.0/grid/OPatch/ocm.rsp

The directory where we unzip patch 12539000 needs to be EMPTY. It should not contain any other files, or OPatch will fail with some rather misleading errors.
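Because a non-empty staging directory produces misleading OPatch errors, it can be worth checking the directory before invoking OPatch. The helper below is hypothetical (it is not part of OPatch), and simply verifies that the staging directory contains nothing besides the unzipped 12539000 patch directory:

```shell
#!/bin/sh
# Hypothetical guard, not part of OPatch: confirm the staging directory
# holds only the unzipped 12539000 patch before running "opatch auto".
check_stage_dir() {
  dir="$1"
  # Count entries other than the expected patch directory
  extras=$(ls -A "$dir" | grep -v '^12539000$' | wc -l)
  if [ "$extras" -gt 0 ]; then
    echo "NOT_CLEAN"   # stray files present - OPatch may fail
  else
    echo "CLEAN"       # safe to run: cd $GRID_HOME/OPatch && ./opatch auto "$dir"
  fi
}
```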

To apply patch 12539000, we need to do the following on BOTH nodes of the RAC cluster, assuming the patch has been unzipped into the directory /u01/stage/GI_11203_PATCH:

cd $GRID_HOME/OPatch

./opatch auto /u01/stage/GI_11203_PATCH

Note – this should patch the Database Oracle Homes as well.

After the patch installation, run the opatch lsinventory command to confirm that patch 12539000 has been applied to both the Database and the Grid Infrastructure Oracle Homes.

 

For example:

[grid@kens-racnode1 OPatch]$ ./opatch lsinventory

Oracle Interim Patch Installer version 11.2.0.3.0

Copyright (c) 2012, Oracle Corporation.  All rights reserved.

Oracle Home       : /u01/app/11.2.0/grid

Central Inventory : /u01/app/oraInventory

from           : /u01/app/11.2.0/grid/oraInst.loc

OPatch version    : 11.2.0.3.0

OUI version       : 11.2.0.2.0

Log file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/opatch2012-06-29_06-05-49AM_1.log

Lsinventory Output file location : /u01/app/11.2.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2012-06-29_06-05-49AM.txt

--------------------------------------------------------------------------------

Installed Top-level Products (1):

Oracle Grid Infrastructure                                           11.2.0.2.0

There are 1 products installed in this Oracle Home.

Interim patches (1) :

 

Patch  12539000     : applied on Fri Jun 29 06:02:12 EDT 2012

Unique Patch ID:  13976979

Created on 28 Jul 2011, 12:37:42 hrs PST8PDT

Bugs fixed:

12539000

 

Rac system comprising of multiple nodes

Local node = kens-racnode1

Remote node = kens-racnode2

--------------------------------------------------------------------------------

OPatch succeeded.

[root@kens-racnode1 12539000]# /u01/app/11.2.0.3/grid/rootupgrade.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME=  /u01/app/11.2.0.3/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of “dbhome” have not changed. No need to overwrite.

The contents of “oraenv” have not changed. No need to overwrite.

The contents of “coraenv” have not changed. No need to overwrite.

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

 

ASM upgrade has started on first node.

 

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.crsd’ on ‘kens-racnode1′

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.registry.acfs’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.cvu’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.kens-racnode1.vip’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.kens-racnode1.vip’ on ‘kens-racnode1′ succeeded

CRS-2672: Attempting to start ‘ora.kens-racnode1.vip’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.scan2.vip’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.scan2.vip’ on ‘kens-racnode1′ succeeded

CRS-2672: Attempting to start ‘ora.scan2.vip’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.scan3.vip’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.scan3.vip’ on ‘kens-racnode1′ succeeded

CRS-2672: Attempting to start ‘ora.scan3.vip’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.cvu’ on ‘kens-racnode1′ succeeded

CRS-2672: Attempting to start ‘ora.cvu’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.registry.acfs’ on ‘kens-racnode1′ succeeded

CRS-2676: Start of ‘ora.cvu’ on ‘kens-racnode2′ succeeded

CRS-2676: Start of ‘ora.kens-racnode1.vip’ on ‘kens-racnode2′ succeeded

CRS-2676: Start of ‘ora.scan2.vip’ on ‘kens-racnode2′ succeeded

CRS-2672: Attempting to start ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode2′

CRS-2676: Start of ‘ora.scan3.vip’ on ‘kens-racnode2′ succeeded

CRS-2672: Attempting to start ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode2′

CRS-2676: Start of ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode2′ succeeded

CRS-2676: Start of ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode2′ succeeded

CRS-2677: Stop of ‘ora.oc4j’ on ‘kens-racnode1′ succeeded

CRS-2672: Attempting to start ‘ora.oc4j’ on ‘kens-racnode2′

CRS-2676: Start of ‘ora.oc4j’ on ‘kens-racnode2′ succeeded

CRS-2677: Stop of ‘ora.DATA.dg’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.asm’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.asm’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.ons’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.ons’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.net1.network’ on ‘kens-racnode1′ succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘kens-racnode1′ has completed

CRS-2677: Stop of ‘ora.crsd’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.evmd’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.asm’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.asm’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.evmd’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.mdnsd’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.ctssd’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.cssd’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.cssd’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘kens-racnode1′

CRS-2673: Attempting to stop ‘ora.crf’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.crf’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.diskmon’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.gipcd’ on ‘kens-racnode1′ succeeded

CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.gpnpd’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘kens-racnode1′ succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘kens-racnode1′ has completed

CRS-4133: Oracle High Availability Services has been stopped.

OLR initialization – successful

Replacing Clusterware entries in inittab

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user ‘root’, privgrp ‘root’..

Operation successful.

Configure Oracle Grid Infrastructure for a Cluster … succeeded

[root@kens-racnode1 12539000]#

 

 

NODE 2

 

[root@kens-racnode2 OPatch]# /u01/app/11.2.0.3/grid/rootupgrade.sh

Performing root user operation for Oracle 11g

 

The following environment variables are set as:

ORACLE_OWNER= grid

ORACLE_HOME=  /u01/app/11.2.0.3/grid

 

Enter the full pathname of the local bin directory: [/usr/local/bin]:

The contents of “dbhome” have not changed. No need to overwrite.

The contents of “oraenv” have not changed. No need to overwrite.

The contents of “coraenv” have not changed. No need to overwrite.

 

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /u01/app/11.2.0.3/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.crsd’ on ‘kens-racnode2′

CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.registry.acfs’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.DATA.dg’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.LISTENER.lsnr’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.oc4j’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.cvu’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.scan3.vip’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.scan3.vip’ on ‘kens-racnode2′ succeeded

CRS-2672: Attempting to start ‘ora.scan3.vip’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.cvu’ on ‘kens-racnode2′ succeeded

CRS-2672: Attempting to start ‘ora.cvu’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.LISTENER.lsnr’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.kens-racnode2.vip’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.scan2.vip’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.kens-racnode2.vip’ on ‘kens-racnode2′ succeeded

CRS-2672: Attempting to start ‘ora.kens-racnode2.vip’ on ‘kens-racnode1′

CRS-2677: Stop of ‘ora.scan2.vip’ on ‘kens-racnode2′ succeeded

CRS-2672: Attempting to start ‘ora.scan2.vip’ on ‘kens-racnode1′

CRS-2676: Start of ‘ora.cvu’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.registry.acfs’ on ‘kens-racnode2′ succeeded

CRS-2676: Start of ‘ora.scan3.vip’ on ‘kens-racnode1′ succeeded

CRS-2672: Attempting to start ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode1′

CRS-2676: Start of ‘ora.scan2.vip’ on ‘kens-racnode1′ succeeded

CRS-2676: Start of ‘ora.kens-racnode2.vip’ on ‘kens-racnode1′ succeeded

CRS-2672: Attempting to start ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode1′

CRS-2676: Start of ‘ora.LISTENER_SCAN3.lsnr’ on ‘kens-racnode1′ succeeded

CRS-2676: Start of ‘ora.LISTENER_SCAN2.lsnr’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.oc4j’ on ‘kens-racnode2′ succeeded

CRS-2672: Attempting to start ‘ora.oc4j’ on ‘kens-racnode1′

CRS-2676: Start of ‘ora.oc4j’ on ‘kens-racnode1′ succeeded

CRS-2677: Stop of ‘ora.DATA.dg’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.asm’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.asm’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.ons’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.ons’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.net1.network’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.net1.network’ on ‘kens-racnode2′ succeeded

CRS-2792: Shutdown of Cluster Ready Services-managed resources on ‘kens-racnode2′ has completed

CRS-2677: Stop of ‘ora.crsd’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.ctssd’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.evmd’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.asm’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.drivers.acfs’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.mdnsd’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.asm’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.cluster_interconnect.haip’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.evmd’ on ‘kens-racnode2′ succeeded

CRS-2677: Stop of ‘ora.mdnsd’ on ‘kens-racnode2′ succeeded

CRS-2677: Stop of ‘ora.cluster_interconnect.haip’ on ‘kens-racnode2′ succeeded

CRS-2677: Stop of ‘ora.ctssd’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.cssd’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.cssd’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.diskmon’ on ‘kens-racnode2′

CRS-2673: Attempting to stop ‘ora.crf’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.crf’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.gipcd’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.diskmon’ on ‘kens-racnode2′ succeeded

CRS-2677: Stop of ‘ora.gipcd’ on ‘kens-racnode2′ succeeded

CRS-2673: Attempting to stop ‘ora.gpnpd’ on ‘kens-racnode2′

CRS-2677: Stop of ‘ora.drivers.acfs’ on ‘kens-racnode2′ succeeded

CRS-2677: Stop of ‘ora.gpnpd’ on ‘kens-racnode2′ succeeded

CRS-2793: Shutdown of Oracle High Availability Services-managed resources on ‘kens-racnode2′ has completed

CRS-4133: Oracle High Availability Services has been stopped.

OLR initialization – successful

Replacing Clusterware entries in inittab

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user ‘root’, privgrp ‘root’..

Operation successful.

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.

Started to upgrade the CSS.

Started to upgrade the CRS.

The CRS was successfully upgraded.

Oracle Clusterware operating version was successfully set to 11.2.0.3.0

 

ASM upgrade has finished on last node.

 

PRKO-2116 : OC4J is already enabled

Configure Oracle Grid Infrastructure for a Cluster … succeeded

 

TEST

 

We can now see that the clusterware processes are running from the 11.2.0.3 Grid Infrastructure Oracle Home.

We can also use the various crsctl query crs commands to confirm the upgraded software version.

 

[root@kens-racnode1 12539000]# ps -ef |grep css

root     19784     1  0 06:50 ?        00:00:00 /u01/app/11.2.0.3/grid/bin/cssdmonitor

root     19807     1  0 06:50 ?        00:00:00 /u01/app/11.2.0.3/grid/bin/cssdagent

grid     19823     1  0 06:50 ?        00:00:02 /u01/app/11.2.0.3/grid/bin/ocssd.bin

root     26134 13158  0 07:05 pts/0    00:00:00 grep css

 

 

[grid@kens-racnode1 ~]$ crsctl query crs activeversion

Oracle Clusterware active version on the cluster is [11.2.0.3.0]

 

[grid@kens-racnode1 ~]$ crsctl query crs releaseversion

Oracle High Availability Services release version on the local node is [11.2.0.3.0]

 

[grid@kens-racnode1 ~]$ crsctl query crs softwareversion

Oracle Clusterware version on node [kens-racnode1] is [11.2.0.3.0]

 

[grid@kens-racnode1 ~]$ crsctl query crs softwareversion kens-racnode2

Oracle Clusterware version on node [kens-racnode2] is [11.2.0.3.0]

 

 

 

 

Diagnosing and Repairing Failures with 11g Data Recovery Advisor


The 11g Data Recovery Advisor is part of the 11g database health-check framework. It diagnoses persistent data failures, presents options to repair and fix the problem, and can also execute the repair and recovery process at our request.

The Data Recovery Advisor can take away a lot of the stress associated with performing backup and recovery, by diagnosing what is wrong and presenting the exact syntax of the commands needed to restore and recover as the case may be. Under pressure anyone can make mistakes, and it is comforting to know there is a tool which can really help the DBA.

The Data Recovery Advisor can be used via OEM Database Control or Grid Control, or via the RMAN command-line interface.

Let us look at an example of using the RMAN Data Recovery Advisor to recover from the loss of control files, both with and without the CONTROLFILE AUTOBACKUP option enabled.

Note that when there is no control file autobackup, the Data Recovery Advisor cannot perform a fully automated recovery for us, and we use a combination of automatic and manual repair to fix the problem.

 

Scenario: loss of control files with AUTOBACKUP enabled

 
RMAN> list failure;

using target database control file instead of recovery catalog
List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
———- ——– ——— ————- ——-
5652 CRITICAL OPEN 18-JUN-12 Control file /u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl is missing
5649 CRITICAL OPEN 18-JUN-12 Control file /u01/app/oracle/oradata/sqlfun/control01.ctl is missing

RMAN> advise failure;

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
———- ——– ——— ————- ——-
5304 CRITICAL OPEN 18-JUN-12 Control file /u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl is missing

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=133 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
no manual actions available

Automated Repair Options
========================
Option Repair Description
—— ——————
1 Use a multiplexed copy to restore control file /u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/sqlfun/sqlfun/hm/reco_632546057.hm

RMAN> repair failure preview;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/sqlfun/sqlfun/hm/reco_632546057.hm

contents of repair script:
# restore control file using multiplexed copy
restore controlfile from ‘/u01/app/oracle/oradata/sqlfun/control01.ctl’;
sql ‘alter database mount’;

RMAN> repair failure;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/sqlfun/sqlfun/hm/reco_632546057.hm

contents of repair script:
# restore control file using multiplexed copy
restore controlfile from ‘/u01/app/oracle/oradata/sqlfun/control01.ctl’;
sql ‘alter database mount’;

Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script

Starting restore at 18-JUN-12
using channel ORA_DISK_1

channel ORA_DISK_1: copied control file copy
output file name=/u01/app/oracle/oradata/sqlfun/control01.ctl
output file name=/u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl
Finished restore at 18-JUN-12

sql statement: alter database mount
released channel: ORA_DISK_1
repair failure complete

Do you want to open the database (enter YES or NO)? YES
database opened

RMAN>

RMAN> list failure;

no failures found that match specification

 

Scenario: loss of control files with no AUTOBACKUP

 
RMAN> list failure;

using target database control file instead of recovery catalog
List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
———- ——– ——— ————- ——-
5652 CRITICAL OPEN 18-JUN-12 Control file /u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl is missing
5649 CRITICAL OPEN 18-JUN-12 Control file /u01/app/oracle/oradata/sqlfun/control01.ctl is missing

RMAN> advise failure;

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
———- ——– ——— ————- ——-
5652 CRITICAL OPEN 18-JUN-12 Control file /u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl is missing
5649 CRITICAL OPEN 18-JUN-12 Control file /u01/app/oracle/oradata/sqlfun/control01.ctl is missing

analyzing automatic repair options; this may take some time
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=135 device type=DISK
analyzing automatic repair options complete

Mandatory Manual Actions
========================
1. If file /u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl was unintentionally renamed or moved, restore it
2. If file /u01/app/oracle/oradata/sqlfun/control01.ctl was unintentionally renamed or moved, restore it
3. If you have a CREATE CONTROLFILE script, use it to create a new control file
4. Contact Oracle Support Services if the preceding recommendations cannot be used, or if they do not fix the failures selected for repair

Optional Manual Actions
=======================
1. If a standby database is available, then perform a Data Guard failover initiated from the standby

Automated Repair Options
========================
no automatic repair options available

RMAN> repair failure preview;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of repair command at 06/18/2012 11:00:06
RMAN-06953: no automatic repairs were listed by ADVISE FAILURE

We find the last available backup of the control file in the FRA and restore from it manually:

RMAN> restore controlfile from ‘/u01/app/oracle/flash_recovery_area/SQLFUN/autobackup/2012_06_18/o1_mf_s_786251074_7xwbl3l4_.bkp’;

Starting restore at 18-JUN-12
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=125 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:02
output file name=/u01/app/oracle/oradata/sqlfun/control01.ctl
output file name=/u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl
Finished restore at 18-JUN-12

RMAN> list failure;

no failures found that match specification

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RMAN> alter database open resetlogs;

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of alter db command at 06/18/2012 11:36:01
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: ‘/u01/app/oracle/oradata/sqlfun/system01.dbf’

RMAN> list failure;

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
———- ——– ——— ————- ——-
5898 CRITICAL OPEN 18-JUN-12 System datafile 1: ‘/u01/app/oracle/oradata/sqlfun/system01.dbf’ needs media recovery
5895 CRITICAL OPEN 18-JUN-12 Control file needs media recovery
8 HIGH OPEN 18-JUN-12 One or more non-system datafiles need media recovery

RMAN> advise failure;

Starting implicit crosscheck backup at 18-JUN-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=125 device type=DISK
Crosschecked 3 objects
Finished implicit crosscheck backup at 18-JUN-12

Starting implicit crosscheck copy at 18-JUN-12
using channel ORA_DISK_1
Crosschecked 5 objects
Finished implicit crosscheck copy at 18-JUN-12

searching for all files in the recovery area
cataloging files…
cataloging done

List of Cataloged Files
=======================
File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/backupset/2012_06_18/o1_mf_ncsnf_TAG20120618T112825_7xx83b9m_.bkp
File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/autobackup.old/2012_06_18/o1_mf_s_786277031_7xx3x7pw_.bkp.old
File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/autobackup.old/2012_06_18/o1_mf_s_786251074_7xwbl3l4_.bkp
File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/autobackup.old/2012_06_18/o1_mf_s_786281256_7xx8184v_.bkp
File Name: /u01/app/oracle/flash_recovery_area/SQLFUN/autobackup.old/2012_06_18/o1_mf_s_786279412_7xx67nlv_.bkp

List of Database Failures
=========================

Failure ID Priority Status Time Detected Summary
———- ——– ——— ————- ——-
5898 CRITICAL OPEN 18-JUN-12 System datafile 1: ‘/u01/app/oracle/oradata/sqlfun/system01.dbf’ needs media recovery
5895 CRITICAL OPEN 18-JUN-12 Control file needs media recovery
8 HIGH OPEN 18-JUN-12 One or more non-system datafiles need media recovery

analyzing automatic repair options; this may take some time
using channel ORA_DISK_1
analyzing automatic repair options complete

Mandatory Manual Actions
========================
no manual actions available

Optional Manual Actions
=======================
1. If you have the correct version of the control file, then shutdown the database and replace the old control file
2. If you restored the wrong version of data file /u01/app/oracle/oradata/sqlfun/system01.dbf, then replace it with the correct one
3. If you restored the wrong version of data file /u01/app/oracle/oradata/sqlfun/sysaux01.dbf, then replace it with the correct one
4. If you restored the wrong version of data file /u01/app/oracle/oradata/sqlfun/undotbs01.dbf, then replace it with the correct one
5. If you restored the wrong version of data file /u01/app/oracle/oradata/sqlfun/users01.dbf, then replace it with the correct one
6. If you restored the wrong version of data file /u01/app/oracle/oradata/sqlfun/threatened_fauna_data.dbf, then replace it with the correct one

Automated Repair Options
========================
Option Repair Description
------ ------------------
1 Recover database
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/sqlfun/sqlfun/hm/reco_2266295139.hm

RMAN> repair failure preview;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/sqlfun/sqlfun/hm/reco_2266295139.hm

contents of repair script:
# recover database
recover database;
alter database open resetlogs;

RMAN> repair failure noprompt;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/sqlfun/sqlfun/hm/reco_2266295139.hm

contents of repair script:
# recover database
recover database;
alter database open resetlogs;
executing repair script

Starting recover at 18-JUN-12
using channel ORA_DISK_1

starting media recovery

archived log for thread 1 with sequence 1 is already on disk as file /u01/app/oracle/oradata/sqlfun/redo01.log
archived log file name=/u01/app/oracle/oradata/sqlfun/redo01.log thread=1 sequence=1
media recovery complete, elapsed time: 00:00:01
Finished recover at 18-JUN-12

database opened
repair failure complete

RMAN>

GoldenGate IGNOREDELETES, IGNOREUPDATES and using the LOGDUMP utility


Some time back I was asked how to configure GoldenGate when, on the target database, we only want to apply records inserted into the source database and ignore any updates made to existing rows on the source.

For this we can use the IGNOREUPDATES parameter, which is valid in both the Extract and the Replicat parameter files and tells GoldenGate to ignore update operations. This parameter is table specific and applies to all tables named in the subsequent TABLE or MAP statements until the GETUPDATES parameter is encountered. Note that GETUPDATES is the default.

In this example we will also see how delete operations on the source database are ignored using the IGNOREDELETES parameter.
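
A sketch of how the scoping works (the table names here are hypothetical): IGNOREUPDATES affects only the TABLE or MAP statements that follow it, until GETUPDATES restores the default behaviour.

```
REPLICAT myrep
ASSUMETARGETDEFS
USERID ggs_owner, PASSWORD ggs_owner
IGNOREUPDATES
-- updates ignored for ORDERS
MAP SH.ORDERS, TARGET SH.ORDERS;
GETUPDATES
-- default behaviour restored: updates applied to CUSTOMERS
MAP SH.CUSTOMERS, TARGET SH.CUSTOMERS;
```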

Let us create a simple table on both source as well as target database with the following structure:

 

SQL> create table mytab
  2  (id number, comments varchar2(20));

Table created.

SQL> alter table mytab add constraint pk_mytab  primary key (id);

Table altered.

 

We then create the extract process Testext on source and replicat process Testrep on target.

This is our Extract parameter file:

extract testext
userid ggs_owner, password ggs_owner
rmthost 10.32.20.62, mgrport 7809
rmttrail /u01/app/goldengate/dirdat/gg
table sh.mytab;

 

This is our Replicat parameter file:

REPLICAT testrep
ASSUMETARGETDEFS
USERID ggs_owner,PASSWORD ggs_owner
IGNOREDELETES
IGNOREUPDATES
MAP SH.MYTAB, TARGET SH.MYTAB;

 

Let us now test the same by inserting a row into the source table

 

SQL> insert into mytab
  2   values
  3  (1,'INSERTED row');

1 row created.

SQL> commit;

 

Then check the target table for the inserted row.

 

SQL> select * from mytab;

        ID COMMENTS
---------- --------------------
         1 INSERTED row

 

We now update the existing row on the source.

 

SQL> update mytab
  2  set comments='UPDATED row'
  3  where id=1;

1 row updated.

SQL> commit;

SQL> select * from mytab;

        ID COMMENTS
---------- --------------------
         1 UPDATED row

 

On the target database, we see that the update to the row has not been applied.

 

SQL> select * from mytab;

        ID COMMENTS
---------- --------------------
         1 INSERTED row

 

Let us now delete the existing record on the source database.

 

SQL> delete mytab;

1 row deleted.

SQL> commit;

Commit complete.

 

Check the target. We see that the row has not been deleted from the target database.

 

SQL> select * from mytab;

        ID COMMENTS
---------- --------------------
         1 INSERTED row

 

On the source GoldenGate environment, let us examine the statistics for the Extract process. We see that 3 operations have been captured: one insert, one update and one delete.

 

 GGSCI (pdemora061rhv.asgdemo.asggroup.com.au) 66> stats extract testext

Sending STATS request to EXTRACT TESTEXT …

Start of Statistics at 2012-07-21 04:51:36.

Output to /u01/app/goldengate/dirdat/gg:

Extracting from SH.MYTAB to SH.MYTAB:

*** Total statistics since 2012-07-21 04:48:33 ***
        Total inserts                                      1.00
        Total updates                                      1.00
        Total deletes                                      1.00
        Total discards                                     0.00
        Total operations                                   3.00

 

On the target however we see that only one single Insert operation has taken place.

 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 27> stats replicat testrep

Sending STATS request to REPLICAT TESTREP …

Start of Statistics at 2012-07-21 04:52:35.

Replicating from SH.MYTAB to SH.MYTAB:

*** Total statistics since 2012-07-21 04:48:29 ***
        Total inserts                                      1.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   1.00

 

Ok. Now the source table has no rows while the target table has one row.

What happens when we insert two rows into the source table?

 

SQL>  insert into mytab
  2   values
  3  (1,'INSERTED row');

1 row created.

SQL>  insert into mytab
  2   values
  3   (2,'INSERTED row');

1 row created.

SQL> commit;

Commit complete.

 

Since the row with ID=1 already existed in the target database (because it was not deleted when the delete happened on the source), the subsequent insert fails and we see this error in the replicat log file.

 

2012-07-21 04:54:17  WARNING OGG-00869  OCI Error ORA-00001: unique constraint (SH.PK_MYTAB) violated (status = 1). INSERT INTO "SH"."MYTAB" ("ID","COMMENTS") VALUES (:a0,:a1).

2012-07-21 04:54:17  WARNING OGG-01004  Aborted grouped transaction on 'SH.MYTAB', Database error 1 (OCI Error ORA-00001: unique constraint (SH.PK_MYTAB) violated (status = 1).

 

We need to tell the Replicat process to skip the insert for the row which already exists, and for this purpose we use the GoldenGate Logdump utility to examine the contents of the trail files.

We then find the RBA (Relative Byte Address) of the second insert (ID=2) and use that RBA to tell the Replicat process to start processing not from the beginning of the trail, but from the point in the trail file indicated by the RBA value we supply to the ALTER REPLICAT command.

We navigate through the trail file using the 'n' (next record) command until we find the record where ID=2.

We can see the first INSERT, then the UPDATE and then the DELETE operation. We then see the second INSERT which we are interested in.

 

[oracle@pdemora062rhv goldengate]$ logdump

Oracle GoldenGate Log File Dump Utility for Oracle
Version 11.2.1.0.1 OGGCORE_11.2.1.0.1_PLATFORMS_120423.0230

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

 

Logdump 41 >open /u01/app/goldengate/dirdat/gg000000
Current LogTrail is /u01/app/goldengate/dirdat/gg000000
Logdump 42 >ghdr on
Logdump 43 >detail on
Logdump 44 >n

2012/07/21 04:45:13.522.696 FileHeader           Len  1087 RBA 0
Name: *FileHeader*
 3000 01cd 3000 0008 4747 0d0a 544c 0a0d 3100 0002 | 0…0…GG..TL..1…
 0003 3200 0004 2000 0000 3300 0008 02f1 eb7c 6dc9 | ..2… …3……|m.
 9208 3400 003f 003d 7572 693a 7064 656d 6f72 6130 | ..4..?.=uri:pdemora0
 3631 7268 763a 6173 6764 656d 6f3a 6173 6767 726f | 61rhv:asgdemo:asggro
 7570 3a63 6f6d 3a61 753a 3a75 3031 3a61 7070 3a67 | up:com:au::u01:app:g
 6f6c 6465 6e67 6174 6536 0000 2500 232f 7530 312f | oldengate6..%.#/u01/
 6170 702f 676f 6c64 656e 6761 7465 2f64 6972 6461 | app/goldengate/dirda

Logdump 45 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    29  (x001d)   IO Time    : 2012/07/21 04:48:15.066.791
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :        401       AuditPos   : 32627632
Continued  :     N  (x00)     RecCount   :     1  (x01)

2012/07/21 04:48:15.066.791 Insert               Len    29 RBA 1095
Name: SH.MYTAB
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3100 0100 1000 0000 0c49 4e53 | ……..1……..INS
 4552 5445 4420 726f 77                            | ERTED row
Column     0 (x0000), Len     5 (x0005)
Column     1 (x0001), Len    16 (x0010)

Logdump 46 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    28  (x001c)   IO Time    : 2012/07/21 04:50:37.094.598
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :        401       AuditPos   : 33013776
Continued  :     N  (x00)     RecCount   :     1  (x01)

2012/07/21 04:50:37.094.598 FieldComp            Len    28 RBA 1235
Name: SH.MYTAB
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3100 0100 0f00 0000 0b55 5044 | ……..1……..UPD
 4154 4544 2072 6f77                               | ATED row
Column     0 (x0000), Len     5 (x0005)
Column     1 (x0001), Len    15 (x000f)

Logdump 47 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     B  (x42)
RecLength  :     9  (x0009)   IO Time    : 2012/07/21 04:51:00.119.007
IOType     :     3  (x03)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :        401       AuditPos   : 33041936
Continued  :     N  (x00)     RecCount   :     1  (x01)

2012/07/21 04:51:00.119.007 Delete               Len     9 RBA 1374
Name: SH.MYTAB
Before Image:                                             Partition 4   G  s
 0000 0005 0000 0001 31                            | ……..1
Column     0 (x0000), Len     5 (x0005)

Logdump 48 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    29  (x001d)   IO Time    : 2012/07/21 04:54:12.124.542
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :        401       AuditPos   : 33273872
Continued  :     N  (x00)     RecCount   :     1  (x01)

2012/07/21 04:54:12.124.542 Insert               Len    29 RBA 1494
Name: SH.MYTAB
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3100 0100 1000 0000 0c49 4e53 | ……..1……..INS
 4552 5445 4420 726f 77                            | ERTED row
Column     0 (x0000), Len     5 (x0005)
Column     1 (x0001), Len    16 (x0010)

Logdump 49 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    29  (x001d)   IO Time    : 2012/07/21 05:02:04.056.719
IOType     :     5  (x05)     OrigNode   :   255  (xff)
TransInd   :     .  (x03)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :        401       AuditPos   : 36183568
Continued  :     N  (x00)     RecCount   :     1  (x01)

2012/07/21 05:02:04.056.719 Insert               Len    29 RBA 1634
Name: SH.MYTAB
After  Image:                                             Partition 4   G  s
 0000 0005 0000 0001 3200 0100 1000 0000 0c49 4e53 | ……..2……..INS
 4552 5445 4420 726f 77                            | ERTED row
Column     0 (x0000), Len     5 (x0005)
Column     1 (x0001), Len    16 (x0010)

 

The Replicat process, which has abended, is now altered to start at a specific RBA and then restarted.

We use the command ALTER REPLICAT testrep, EXTRBA 1634 to reposition the Replicat process so that it starts reading records from that position in the trail file rather than from the beginning.

We now see that the Replicat has started running and has processed the second insert statement.

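
The repositioning, sketched as a GGSCI session (the full host prompt is shortened here and the command output is omitted):

```
GGSCI> alter replicat testrep, extrba 1634
GGSCI> start replicat testrep
GGSCI> info replicat testrep
```

EXTRBA makes the Replicat begin reading the current trail file at the given relative byte address, so the conflicting first insert at RBA 1494 and everything before it are skipped.
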
SQL> select * from mytab;

        ID COMMENTS
---------- --------------------
         1 INSERTED row
         2 INSERTED row

 

Statistics now show 2 insert operations; note that no deletes or updates were processed.

 

GGSCI (pdemora062rhv.asgdemo.asggroup.com.au) 2> stats replicat testrep

Sending STATS request to REPLICAT TESTREP …

Start of Statistics at 2012-07-21 06:52:43.

Replicating from SH.MYTAB to SH.MYTAB:

*** Total statistics since 2012-07-21 05:19:12 ***
        Total inserts                                      2.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                   2.00

RMAN 11g new feature – Backup Fast Recovery Area (FRA) to Disk


 

Oracle 11g R2 introduced a useful new feature: we can back up the Fast (formerly Flash) Recovery Area to a disk location, which could be a remote destination such as an NFS-mounted file system.

In earlier releases, RMAN could only back up the Flash Recovery Area to tape, not to disk.

Recovery from disk is in most cases significantly faster than recovery from tape, especially when we have a very large tape library and the right tape has to be located, or when all tape drives are already in use while we need to perform a critical database recovery.

The OSS (Oracle Suggested Strategy) for backups involves a disk-based backup method: the level 0 data file copies and the subsequent level 1 incremental backup sets are all stored on local disk.

So what happens if we lose the local server and, with it, all our disk-based backups? We have to do a full database restore from tape, which can be very time consuming.

The 11g RMAN command BACKUP RECOVERY AREA TO DESTINATION lets us specify a secondary backup location for all our backups which are stored in the Fast Recovery Area.

In this example we are backing up the FRA on a daily basis after the OSS backup to disk completes via the command:

backup recovery area to destination '/mnt/remote/backups/orasql/FRA_BACKUP/'

If we run a LIST BACKUP OF DATABASE we can see that there are two copies of the backupset #479. One stored in the FRA on the local server /u01 file system and one in the remote location which is a file server attached via NFS to the local server.

List of Backup Pieces for backup set 479 Copy #1
    BP Key  Pc# Status      Piece Name
    ------- --- ----------- ----------
    565     1   AVAILABLE   /u01/app/oracle/flash_recovery_area/SQLFUN/backupset/2012_08_01/o1_mf_nnnd1_ORA_OEM_LEVEL_0_81jgs7qf_.bkp

  Backup Set Copy #2 of backup set 479
  Device Type Elapsed Time Completion Time Compressed Tag
  ----------- ------------ --------------- ---------- ---
  DISK        00:00:18     01-AUG-12       NO         ORA_OEM_LEVEL_0

    List of Backup Pieces for backup set 479 Copy #2
    BP Key  Pc# Status      Piece Name
    ------- --- ----------- ----------
    571     1   AVAILABLE   /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_08_01/o1_mf_nnnd1_ORA_OEM_LEVEL_0_81jh0x4l_.bkp

Let us now test a restore using this remote backup location by simulating a total server failure where we lose all our disk based backups residing on the local server which has crashed.

To simulate a total server crash I do the following:

1. Shut down the database.
2. Rename the directory holding the data files of the database.
3. Rename the spfile and init.ora file.
4. Rename the FRA directory for the database so that RMAN cannot find the local backups in the FRA.
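
On a test system, those steps might be scripted roughly as follows (the paths are the ones used in this post; the spfile name assumes the default spfile<SID>.ora convention — run this only on a disposable test instance):

```shell
# shut the database down first
sqlplus -s / as sysdba <<EOF
shutdown immediate;
exit;
EOF

# hide the data files, parameter files and the FRA from the instance
mv /u01/app/oracle/oradata/sqlfun /u01/app/oracle/oradata/sqlfun.hide
mv $ORACLE_HOME/dbs/spfilesqlfun.ora $ORACLE_HOME/dbs/spfilesqlfun.ora.hide
mv $ORACLE_HOME/dbs/initsqlfun.ora $ORACLE_HOME/dbs/initsqlfun.ora.hide
mv /u01/app/oracle/flash_recovery_area/SQLFUN /u01/app/oracle/flash_recovery_area/SQLFUN.hide
```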

When we perform the restore and recovery, RMAN finds that it cannot access the backups stored in the FRA (because we have renamed the directory).

It will now try and restore the copy of the FRA backups which was stored in the remote location.

This can be seen in the RMAN output from lines like "reading from backup piece /mnt/remote/backups/orasql/FRA_BACKUP" ….
RESTORE SPFILE

RMAN> startup force nomount;

startup failed: ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/u01/app/oracle/product/11.2.0/dbhome_1/dbs/initsqlfun.ora'

starting Oracle instance without parameter file for retrieval of spfile
Oracle instance started

Total System Global Area     158662656 bytes

Fixed Size                     2211448 bytes
Variable Size                 92275080 bytes
Database Buffers              58720256 bytes
Redo Buffers                   5455872 bytes

RMAN> restore spfile from '/mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_ncsnf_TAG20120528T080501_7w5oo92y_.bkp';

Starting restore at 28-MAY-12
using channel ORA_DISK_1

channel ORA_DISK_1: restoring spfile from AUTOBACKUP /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_ncsnf_TAG20120528T080501_7w5oo92y_.bkp
channel ORA_DISK_1: SPFILE restore from AUTOBACKUP complete
Finished restore at 28-MAY-12

RMAN> shutdown immediate;

Oracle instance shut down

RMAN> startup nomount;

connected to target database (not started)
Oracle instance started

Total System Global Area     801701888 bytes

Fixed Size                     2217632 bytes
Variable Size                490735968 bytes
Database Buffers             301989888 bytes
Redo Buffers                   6758400 bytes

RESTORE CONTROLFILE

RMAN> restore controlfile from  '/mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_ncsnf_TAG20120528T080501_7w5oo92y_.bkp';

Starting restore at 28-MAY-12
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=134 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:01
output file name=/u01/app/oracle/oradata/sqlfun/control01.ctl
output file name=/u01/app/oracle/flash_recovery_area/sqlfun/control02.ctl
Finished restore at 28-MAY-12

RMAN> alter database mount;

database mounted
released channel: ORA_DISK_1

RESTORE DATABASE

RMAN> catalog start with  '/mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN';

searching for all files that match the pattern /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN

List of Files Unknown to the Database
=====================================
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_annnn_TAG20120528T081134_7w5oob2y_.bkp
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_annnn_TAG20120528T093614_7w5ongvb_.bkp
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_ncsnf_TAG20120528T080501_7w5oo92y_.bkp
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_nnndf_TAG20120528T080501_7w5onhys_.bkp

Do you really want to catalog the above files (enter YES or NO)? YES
cataloging files...
cataloging done

List of Cataloged Files
=======================
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_annnn_TAG20120528T081134_7w5oob2y_.bkp
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_annnn_TAG20120528T093614_7w5ongvb_.bkp
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_ncsnf_TAG20120528T080501_7w5oo92y_.bkp
File Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_nnndf_TAG20120528T080501_7w5onhys_.bkp

RMAN> restore database;

Starting restore at 28-MAY-12
using channel ORA_DISK_1

channel ORA_DISK_1: starting datafile backup set restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
channel ORA_DISK_1: restoring datafile 00001 to /u01/app/oracle/oradata/sqlfun/system01.dbf
channel ORA_DISK_1: restoring datafile 00002 to /u01/app/oracle/oradata/sqlfun/sysaux01.dbf
channel ORA_DISK_1: restoring datafile 00003 to /u01/app/oracle/oradata/sqlfun/undotbs01.dbf
channel ORA_DISK_1: restoring datafile 00004 to /u01/app/oracle/oradata/sqlfun/users01.dbf
channel ORA_DISK_1: restoring datafile 00005 to /u01/app/oracle/oradata/sqlfun/threatened_fauna_data.dbf
channel ORA_DISK_1: reading from backup piece /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_nnndf_TAG20120528T080501_7w5onhys_.bkp
channel ORA_DISK_1: piece handle=/mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_nnndf_TAG20120528T080501_7w5onhys_.bkp tag=TAG20120528T080501
channel ORA_DISK_1: restored backup piece 1
channel ORA_DISK_1: restore complete, elapsed time: 00:00:55
Finished restore at 28-MAY-12

RECOVER DATABASE

RMAN> list backup of archivelog all;

List of Backup Sets
===================

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
164     68.50K     DISK        00:00:00     28-MAY-12
        BP Key: 164   Status: AVAILABLE  Compressed: NO  Tag: TAG20120528T081134
        Piece Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_annnn_TAG20120528T081134_7w5oob2y_.bkp

  List of Archived Logs in backup set 164
 Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    386     8176419    28-MAY-12 8176669    28-MAY-12

BS Key  Size       Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
165     68.50K     DISK        00:00:00     28-MAY-12
        BP Key: 165   Status: AVAILABLE  Compressed: NO  Tag: TAG20120528T093614
        Piece Name: /mnt/remote/backups/orasql/FRA_BACKUP/SQLFUN/backupset/2012_05_28/o1_mf_annnn_TAG20120528T093614_7w5ongvb_.bkp

  List of Archived Logs in backup set 165
  Thrd Seq     Low SCN    Low Time  Next SCN   Next Time
  ---- ------- ---------- --------- ---------- ---------
  1    386     8176419    28-MAY-12 8176669    28-MAY-12

RMAN> recover database until sequence 387;

Starting recover at 28-MAY-12
using channel ORA_DISK_1

starting media recovery
media recovery complete, elapsed time: 00:00:01

Finished recover at 28-MAY-12

RMAN> alter database open resetlogs;

database opened

Migrating the 11i E-Business Suite database to Linux


This note describes the process used to migrate the EBS Release 11i (11.5.10.2) 10g R2 database from a Solaris platform to a Linux 64 bit platform.

As part of the Release 12 upgrade preparation, rather than perform a Solaris-to-Linux platform migration and an 11i to R12.1.3 upgrade at the same time, we are relocating the database to the Linux platform at an earlier stage.

The idea was to perform the following operations before the R12.1 upgrade, rather than as part of the upgrade project, to lower the risk, the complexity and the downtime. This is also the approach advocated by Oracle as a best practice.

Our current configuration was:

Two nodes (both Solaris)

Node A – database plus concurrent manager

Node B – Forms/Web

 

Remember, Linux 64 bit is not supported for the 11i Application Tier. It is only supported in R12.

So at this stage, we are only relocating the database to the Linux 64 bit platform.

The configuration after the database move will be:

Node A – Concurrent Manager (Solaris)

Node B – Forms/Web (Solaris)

Node C – Database (Linux)

This configuration will be interim until the Application Tiers are also migrated to Linux as part of the R12.1 upgrade.

 

Steps

 

On the Source (Solaris) Admin Tier:

 

1)

Download and apply patches 4872830 and 7225862.

2)

Connect as SYSTEM  and run:

$AD_TOP/patch/115/sql/adclondb.sql 10

(This will create the scripts adcrdb.sql and adpostcrdb.sql)

3)

Copy auque1.sql script from the $AU_TOP/patch/115/sql directory on the source administration server node to the source database server node

4)

Copy $AU_TOP/patch/115/import/auexpdp.dat from the source administration server node to the directory on the database server node where the export dump files are to be created

5)

Certain ConText and Spatial objects are not handled by the import process. The consolidated export/import utility patch 4872830 that was applied to the source administration server node earlier contains a perl script, dpost_imp.pl, that you can run to generate an AutoPatch driver file.

perl $AU_TOP/patch/115/driver/dpost_imp.pl dpostImp.drv

 

On the Source (Solaris) Database Tier:

 

As SYS as SYSDBA ….

1)

Purge the recycle bin.

SQL> purge dba_recyclebin;

2)

Execute the auque1.sql script. This will generate the auque2.sql in the current directory.

SQL> @auque1.sql

3)

Create the directory for the Export Data Pump job

SQL> create directory dmpdir as '/u01/oracle/oraexp';

4)

Edit the Export parameter file auexpdp.dat :

directory=dmpdir
dumpfile=aexp%U.dmp
filesize=1048576000
full=y
exclude=SCHEMA:"='MDDATA'"
exclude=SCHEMA:"='OLAPSYS'"
exclude=SCHEMA:"='ORDSYS'"
exclude=SCHEMA:"='DMSYS'"
exclude=SCHEMA:"='OUTLN'"
exclude=SCHEMA:"='ORDPLUGINS'"
#transform=oid:n
logfile=expdpapps.log

 

5)

Export the database

expdp "'/ as sysdba'" parfile=auexpdp.dat

 

6)

After the export is complete, shut down the source (Solaris) database and listener.

 

On the Target (Linux) Database Tier:

1)

Create a working directory for all the database creation scripts

2)

Copy from the source admin tier $APPL_TOP/admin to this directory all these type of files:

*.pls

*.sql

 

3)

Edit the adcrdb.sql as required if the source and target servers have a different directory structure

Copy from the source database tier $ORACLE_HOME/dbs/init<SID>.ora to the target database $ORACLE_HOME/dbs

 

4)

Edit the init.ora as required if directory paths are different on source and target servers

For example we may need to change parameters such as control_files, background_dump_dest, core_dump_dest, user_dump_dest and utl_file_dir.

Comment out the parameters undo_tablespace and undo_management.

 

5)

Create nls/data/9idata directory.

On the database server node, as the owner of the Oracle RDBMS file system and database instance, run the $ORACLE_HOME/nls/data/old/cr9idata.pl script to create the $ORACLE_HOME/nls/data/9idata directory.

Ensure the environment variable ORA_NLS10 points to this location.

 

6)

Ensure environment variables on the target database server have been properly set up – ORACLE_HOME and ORACLE_SID

 

7)

Create the target database instance

SQL> startup nomount pfile='init<ORACLE_SID>.ora';

SQL> @adcrdb.sql

 

8)

After the target database instance has been created, shut down the instance.

Enable AUM in the database by uncommenting the undo_tablespace and undo_management parameters that were commented out earlier.

 

SQL>  create spfile from pfile;

SQL> startup;

 

9)

Setup the SYS schema

 

As SYS as SYSDBA run

SQL> @addb1020.sql

 

10)

Setup the SYSTEM schema

 

Connect as SYSTEM/manager

SQL>@adsy1020.sql

SQL> @adsysapp2.sql

 

11)

Install Java Virtual Machine

As SYSTEM

SQL> @adjv1020.sql

 

12)

Install other required components like XDB, OLAP, interMedia, ConText …

 

As SYSTEM

SQL> @admsc1020.sql FALSE SYSAUX TEMP

 

13)

Run adpostcrdb.sql script to convert tablespaces to locally managed

 

As SYS as SYSDBA

SQL>@adpostcrdb.sql

 

14)

Disable automatic gathering of statistics

 

SQL> shutdown immediate;

SQL> startup restrict;

SQL> @adstats.sql

 

Preparation for import

 

1)

Copy all the export dump files and the export parameter file  from source database node to appropriate directory on the target database node

2)

If we have database objects which have external dependencies resolved via database links, then copy the target database tnsnames.ora file to the source database $ORACLE_HOME/network/admin. This will ensure objects based on database links will compile when the import is happening as well as when we run the utlrp.sql script after the import

3)

Shutdown and restart the database

4)

If ARCHIVELOG mode is enabled turn off archiving

5)

Create directory for Import Data Pump

6)

Set parameter resumable_timeout to 7200 (2 hours)

7)

Turn on autoextend for all the database data files as well as temporary files
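
Steps 6 and 7 can be done along these lines (a sketch; review the generated statements before running them):

```sql
alter system set resumable_timeout=7200 scope=both;

-- generate ALTER ... AUTOEXTEND ON statements for any files
-- that do not yet autoextend
select 'alter database datafile ''' || file_name || ''' autoextend on;'
from   dba_data_files
where  autoextensible = 'NO';

select 'alter database tempfile ''' || file_name || ''' autoextend on;'
from   dba_temp_files
where  autoextensible = 'NO';
```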

8)

Edit the export parameter file and rename it to auimp.dat:

directory=dmpdir
dumpfile=aexp%U.dmp
full=y
transform=oid:n
logfile=import.log

 

Importing the database

 

From a VNC session, start the import:

impdp "'/ as sysdba'" parfile=auimp.dat

 

In our case the database size was 150 GB and the import took 6 hours.

Monitor the import log as well as the database alert log for any errors related to space, or for the import Data Pump job hanging.

We may encounter a MAXEXTENTS reached error for 'DR$FND_LOBS_CTX$I', in which case the import hangs; we then have to use the ALTER TABLE <table_name> STORAGE (MAXEXTENTS UNLIMITED) command to allow the hanging import Data Pump job to resume.
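
The fix for that hang would look something like this, run from another session (the APPLSYS schema shown as owner of the ConText index table is an assumption — check the owner on your system first):

```sql
-- raise the extent limit so the suspended resumable
-- Data Pump statement can continue
alter table applsys.dr$fnd_lobs_ctx$i storage (maxextents unlimited);
```

Once the limit is raised, the suspended statement resumes automatically within the resumable_timeout window.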

 

After the Import

 

1)

Reset advanced queues

As SYS as SYSDBA run

SQL> @auque2.sql

 

2)

Run adgrants.sql

As SYS as SYSDBA run

SQL> @adgrants.sql <APPLSYS schema name>

 

3)

Grant create procedure privilege on CTXSYS

Connect as APPS user

SQL> @adctxprv.sql <SYSTEM password> CTXSYS

 

4)

Create OWA_MATCH package

Download patch 3835781 and unzip on the target database node

As SYS as SYSDBA

SQL> @patch.sql

 

5)

Gather statistics for SYS schema

shutdown normal;

startup restrict;

@adstats.sql

shutdown normal;

startup;

 

6)

Recompile INVALID objects

SQL> @?/rdbms/admin/utlrp.sql

 

7)

Compare object count in APPS and APPLSYS schemas in both source and target database – identify any missing objects
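
A simple way to compare is to run the same aggregate query on both databases and diff the results (a sketch):

```sql
select owner, object_type, count(*)
from   dba_objects
where  owner in ('APPS', 'APPLSYS')
group  by owner, object_type
order  by owner, object_type;
```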

 

8)

Fix Korean lexers

sqlplus "/ as sysdba" @$ORACLE_HOME/ctx/sample/script/drkorean.sql

 

Note – the ctx/sample directory is installed when you install the Companion CD

 

 

Implement and run AutoConfig

(ensure database listener is running at this stage)

 

1)

connect as apps and execute the following

EXEC FND_CONC_CLONE.SETUP_CLEAN;

Commit;

 

2)

Create the appsutil.zip file

 On  the source  Application Tier (as the APPLMGR user)

 

  • Log in to the APPL_TOP environment (source the environment file)
  • Create the appsutil.zip file:
    perl <AD_TOP>/bin/admkappsutil.pl
  • This will create appsutil.zip in $APPL_TOP/admin/out.

 

On the Database Tier (as  the ORACLE user):

  • Copy or FTP the appsutil.zip file to the <RDBMS ORACLE_HOME>
  • cd <RDBMS ORACLE_HOME>
    unzip -o appsutil.zip

 

3)

Generate the database context file

 

cd <RDBMS ORACLE_HOME>
. <CONTEXT_NAME>.env
cd <RDBMS ORACLE_HOME>/appsutil/bin
perl adbldxml.pl tier=db appsuser=<APPSuser>

cd <RDBMS ORACLE_HOME>/appsutil/bin
adconfig.sh contextfile=<CONTEXT>

 

4)

After AutoConfig has been run, check that the FND_NODES table has been populated.

As APPS

SQL> Select node_name,support_DB from FND_NODES;

 

5)

Now run autoconfig on the ADMIN and Forms/web server nodes

 

Change the XML context files; in this case we have to provide details of the new Linux hostname, the listener port and so on:

 

<jdbc_url oa_var="s_apps_jdbc_connect_descriptor">jdbc:oracle:thin:@(DESCRIPTION=(LOAD_BALANCE=YES)(FAILOVER=YES)(ADDRESS_LIST=(ADDRESS=(PROTOCOL=tcp)(HOST=KENS-ORASQL-001)(PORT=1521)))(CONNECT_DATA=(SID=CLMTS10G)))</jdbc_url>

<dbhost oa_var="s_dbhost">kens-orasql-001</dbhost>

<domain oa_var="s_dbdomain">corporateict.domain</domain>

<dbport oa_var="s_dbport" oa_type="PORT">1521</dbport>

 

6)

After running AutoConfig, check the FND_NODES table to ensure the right values are there for SUPPORT_CP, SUPPORT_FORMS, SUPPORT_WEB etc.

 

7)

Ensure the environment variable APPLPTMP matches the database UTL_FILE_DIR value
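A quick cross-check (illustrative; APPLPTMP comes from the application tier environment file):

SQL> select value from v$parameter where name = 'utl_file_dir';

On the application tier, echo $APPLPTMP and confirm that the directory appears in the UTL_FILE_DIR list returned above.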

 

8)

From adadmin, select the "Recreate grants and synonyms for APPS schema" task from the Maintain Applications Database Objects menu.

Then compile the flexfield data in the AOL tables.

 

9)

After the services have been started on the admin and web/forms tiers, connect to the 11i Oracle Applications instance and run the concurrent request to create the DQM indexes.
Create the DQM indexes by following these steps:

  • Log on to Oracle Applications with the "Trading Community Manager" responsibility
  • Click Control > Request > Run
  • Select the "Single Request" option
  • Enter the "DQM Staging Program" name
  • Enter the following parameters:
    • Number of Parallel Staging Workers: 4
    • Staging Command: CREATE_INDEXES
    • Continue Previous Execution: NO
    • Index Creation: SERIAL
  • Click "Submit"

 

10)

On the admin node, use AutoPatch to run the driver file which was generated earlier

dpostImp.drv

 

Review MOS notes :

Interoperability Notes Oracle EBS Release 11i with Oracle Database 10.2.0.4 [ID 1135973.1]

10g Release 2 Export/Import Process for Oracle Applications Release 11i [ID 362205.1]

A look at Parsing and Sharing of Cursors


Parsing is the first stage in processing a SQL statement – the other two being the Execute and Fetch stages.

Parse once – execute many is a very important performance tuning goal.

What does parsing involve?

  • Syntax check – is the SQL statement syntactically correct?
  • Semantic check – is the SQL statement meaningful, or semantically correct? Does the table exist, are the columns in the SQL part of the table, does the user have the required privileges on the table, etc.
  • Shared Pool check – the database uses a hashing algorithm to generate a hash value for every SQL statement executed, and this hash value is checked against the shared pool to see if an existing, already parsed statement has the same hash value. If so, it can be reused.

Hard Parse vs. Soft Parse

Soft Parse – when the parsed representation of a submitted SQL statement exists in the shared pool and can be shared – it is a library cache hit. Performs syntax and semantic checks but avoids the relatively costly operation of query optimization. Reuses the existing shared SQL area which already has the execution plan required to execute the SQL statement.

Hard Parse – if a statement cannot be reused, or if it is the very first time the SQL statement is being loaded into the library cache, it results in a hard parse. Also, when a statement has aged out of the shared pool (because the shared pool is limited in size) and is later reloaded, it results in another hard parse. So the size of the shared pool can also affect the number of hard parses.

A hard parse thus occurs when a SQL statement is executed and it is either not in the shared pool, or it is in the shared pool but cannot be shared – it is a library cache miss.

Hard parsing is a CPU-intensive operation, as the database accesses the library cache and data dictionary cache numerous times to check the data dictionary. When it does this it has to take out a latch, which is a low-level lock.

Excessive hard parsing can cause performance degradation due to library cache locking and mutexes.
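Instance-wide parse activity can be sampled from V$SYSSTAT using the standard statistic names; a hard parse count growing nearly as fast as the total parse count points at poorly shared SQL:

SQL> select name, value
  2  from v$sysstat
  3  where name in ('parse count (total)', 'parse count (hard)');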

What happens the first time a SQL statement is executed?

The RDBMS creates a hash value for the text of the statement and then uses that hash value to check for parsed SQL statements already existing in the shared pool with the same hash value. Since it is the first time the statement is being executed, there is no hash value match and a hard parse occurs.

The costly query optimization phase is performed at this point. The database then stores the parse tree and execution plan in a shared SQL area.

When a statement is first parsed, a parent cursor and a single child cursor are created.

 

Sharing of SQL code – next time same SQL is executed

The text of the SQL statement being executed is hashed.

If no matching hash value found, perform a hard parse.

If a matching hash value exists in the shared pool, compare the text of the matched statement in the shared pool with the statement which has been hashed. They need to be identical character for character, including white space, case and comments.

The objects referenced in the SQL statement must match the objects referenced by the statement in the shared pool which is being considered for sharing. If user A and user B each own a table called EMP, then SELECT * FROM EMP issued by user A will not be the same as the identical statement issued by user B.

Any bind variables used must match in terms of name, data type and length.

The sessions' environments also need to be identical. For example, if at the database level we have OPTIMIZER_MODE=ALL_ROWS, but user A has used an ALTER SESSION command to change the optimization goal for his session to, say, FIRST_ROWS, then that session will not be able to use existing shared SQL areas which have a different optimization goal.
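As a sketch of how bind variables keep statements shareable, literal variants of a query can instead be written with a single bind in SQL*Plus:

SQL> variable vtype varchar2(30)
SQL> exec :vtype := 'TABLE'
SQL> select distinct owner from myobjects where object_type = :vtype;
SQL> exec :vtype := 'INDEX'
SQL> select distinct owner from myobjects where object_type = :vtype;

Both executions present identical SQL text (including the bind name), so they can share the same parent cursor.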

 

What happens when SQL is not shared


If, based on the criteria above, an identical statement does not exist, then the database allocates a new shared SQL area in the shared pool.
A statement with the same syntax but different semantics uses a child cursor.
Shared SQL area

Exists in the shared pool and contains the parse tree and execution plan for a single SQL statement.

Private SQL Area

Exists in the PGA of each separate session executing a SQL statement and points to a shared SQL area (in SGA). Many sessions PGA can point to the same Shared SQL Area

 

Let us run these three SQL statements.

select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type='TABLE';
select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type='INDEX';
select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type='RULE';

Note each SQL statement is unique and has its own SQL_ID.

Since each statement is being executed the first time, it is (hard) parsed as well.

 
SQL> select SQL_ID,EXECUTIONS,LOADS,PARSE_CALLS from v$sqlarea where sql_text like 'select /*TEST_NO_BINDS*/%';

SQL_ID        EXECUTIONS      LOADS PARSE_CALLS
------------- ---------- ---------- -----------
2tj15h6w34xcc          1          1           1
63zqvyxt1a0an          1          1           1
d6tqd33w7u8xa          1          1           1

Now let us run one of the earlier SQL statements again.

select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type='TABLE';

Since the parsed form of this SQL statement already exists in the shared pool, it is re-used. Note that LOADS remains the same (1), but both EXECUTIONS and PARSE_CALLS have increased (to 2). In this case a soft parse has happened.

How do we tell if a parse has been a hard parse or a soft parse? The V$SQL and V$SQLAREA views will not provide us with this information. We will have to turn on tracing and use TKPROF to identify this.
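A minimal tracing session looks like the following (trace file names and locations vary by environment):

SQL> alter session set tracefile_identifier = 'PARSETEST';
SQL> alter session set sql_trace = true;
SQL> select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type='TABLE';
SQL> alter session set sql_trace = false;

$ tkprof <user_dump_dest>/<instance>_ora_<pid>_PARSETEST.trc parsetest.out

In the TKPROF output, 'Misses in library cache during parse: 1' indicates a hard parse, while 0 means the parse call was soft.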

 
SQL> select SQL_ID,EXECUTIONS,LOADS,PARSE_CALLS from v$sqlarea where sql_text like 'select /*TEST_NO_BINDS*/%';

SQL_ID        EXECUTIONS      LOADS PARSE_CALLS
------------- ---------- ---------- -----------
2tj15h6w34xcc          1          1           1
63zqvyxt1a0an          1          1           1
d6tqd33w7u8xa          2          1           2

We can also see how long a statement has been in the shared pool. Note that SQL statements can get invalidated and flushed from the shared pool, and will also age out when Oracle needs to load new statements. Once the shared pool fills up, Oracle uses an LRU algorithm to age out the 'older' SQL statements.
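For test environments only (it forces fresh hard parses for every statement, so avoid it on a busy production system), the cache can also be emptied manually:

SQL> alter system flush shared_pool;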

 
SQL> select first_load_time from v$sqlarea where sql_text like 'select /*TEST_NO_BINDS*/%';

FIRST_LOAD_TIME
----------------------------------------------------------------------------
2012-09-04/05:26:29
2012-09-04/05:26:40
2012-09-04/05:25:31

Let us now look at the Version Count.

Remember that non-sharing of SQL and consequent high version count is one of the top causes of library cache contention and database ‘hang’ situations.

If there are too many versions of the same cursor, the parse engine has to search through a complete list of all the versions until it finds the 'right' version. This activity can consume a lot of CPU cycles as well.

 
SQL>  select SQL_ID,EXECUTIONS,LOADS,PARSE_CALLS,version_count  from v$sqlarea where sql_text like 'select /*TEST_NO_BINDS*/%';

SQL_ID        EXECUTIONS      LOADS PARSE_CALLS VERSION_COUNT
------------- ---------- ---------- ----------- -------------
2tj15h6w34xcc          1          1           1             1
63zqvyxt1a0an          1          1           1             1
d6tqd33w7u8xa          2          1           2             1

SQL>  select SQL_ID,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TEST_NO_BINDS*/%';

SQL_ID        CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------- ------------ ---------- ---------- -----------
2tj15h6w34xcc            0          1          1           1
63zqvyxt1a0an            0          1          1           1
d6tqd33w7u8xa            0          2          1           2

Let us now connect as another user, SCOTT who also owns a table called MYOBJECTS.

 

SQL> select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type='TABLE';

 

We see that because two objects exist (SYSTEM.MYOBJECTS and SCOTT.MYOBJECTS), the query "select /*TEST_NO_BINDS*/ distinct owner from myobjects" has led to another child, and another version of the SQL statement, being created in the shared pool.

Note the SQL_ID does not change regardless of the version count increasing!

 
SQL> select child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_id='d6tqd33w7u8xa';

CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------ ---------- ---------- -----------
           0          2          1           2
           1          1          3           1

What happens if we change some optimization setting in the user's environment and run the same SQL statement again?

 
SQL> alter session set optimizer_mode=FIRST_ROWS;

Session altered.

SQL> select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type='TABLE';

SQL> select child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_id='d6tqd33w7u8xa';

CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------ ---------- ---------- -----------
           0          2          1           2
           1          1          3           1
           2          1          2           1

SQL>  select version_count,invalidations from v$sqlarea where sql_id='d6tqd33w7u8xa';

VERSION_COUNT INVALIDATIONS
------------- -------------
            3             3

Why was the cursor not shared?

 
SQL>  select hash_value,address from v$sqlarea where sql_text like 'select /*TEST_NO_BINDS*/ distinct owner from myobjects where object_type=''TABLE''';

HASH_VALUE ADDRESS
---------- ----------------
4168950698 000000008D920748

SQL> select * from v$sql_shared_cursor where address ='000000008D920748';

SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER U S O O S L F E B P I S T A B D L T B I I R L I O E M U T N F A I T D L D B P C S C P T M B M R O P M F L P L A F L R L H P B
------------- ---------------- ---------------- ------------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
d6tqd33w7u8xa 000000008D920748 00000000985A42D8            0 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
d6tqd33w7u8xa 000000008D920748 000000008A8D4E68            1 N N N N N N N N N N N N N Y N N N Y N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
d6tqd33w7u8xa 000000008D920748 000000008D9D9DA0            2 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N N N N N N N N N N N N N

 

Check the columns in the V$SQL_SHARED_CURSOR view and look for occurrences of ‘Y’.

We can see that the reasons the cursor was not shared are AUTH_CHECK_MISMATCH and TRANSLATION_MISMATCH in the first case, and OPTIMIZER_MODE_MISMATCH in the second.
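Rather than eyeballing the wall of Y/N flags, the suspect columns can also be selected directly (column names as found in the 11g view; check your version's definition of V$SQL_SHARED_CURSOR):

SQL> select child_number, auth_check_mismatch, translation_mismatch, optimizer_mode_mismatch
  2  from v$sql_shared_cursor
  3  where sql_id = 'd6tqd33w7u8xa';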

CURSOR_SHARING=SIMILAR and FORCE – some examples


The CURSOR_SHARING parameter basically influences the extent to which SQL statements (or cursors) can be shared.

The possible values are EXACT (the default), SIMILAR and FORCE.

The official definitions of SIMILAR and FORCE are:

cursor_sharing=SIMILAR: "Causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized."

cursor_sharing=FORCE: "Forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement."

So basically it means that, with the SIMILAR setting, the SQL statement will be shared even though literals differ, as long as this does not affect the execution plan of the statement.

The cursor_sharing=SIMILAR setting has been deprecated in 11gR2 because its use was found to potentially have significant performance implications, related to the number of child cursors created for a single parent.

In versions prior to 11gR2, there was a limit of 1024 on the number of child cursors which could be associated with a single parent. Once this number was crossed, the parent was marked obsolete, which invalidated it as well as all its associated child cursors.

In 11gR2 this upper limit was removed, and that was found to cause a lot of CPU usage and waits on mutexes and library cache locks while searching the library cache for matching cursors. Having many child cursors all associated with one parent cursor could perform much worse than having the many parent cursors that would be seen with the default setting of CURSOR_SHARING=EXACT.

Using this parameter also bypassed one of the big improvements made in 11g: Adaptive Cursor Sharing.

Let us examine the behaviour with CURSOR_SHARING set to SIMILAR as opposed to FORCE and note the differences between the two.

SQL> alter system set cursor_sharing='SIMILAR';

System altered.

Let us now run these three SQL statements, which are identical in text and differ only in their literal values

SQL> select /*TEST_SIMILAR*/  count(*) from system.myobjects where owner='SYS';

  SQL> select /*TEST_SIMILAR*/  count(*) from system.myobjects where owner='SYSTEM';

SQL> select /*TEST_SIMILAR*/  count(*) from system.myobjects where owner='SCOTT';

We can see how the three different SQL statements have been transformed into one single SQL statement – note the predicate where owner=:"SYS_B_0" and also the fact that there is only one SQL_ID.

Oracle has automatically replaced the literal with a bind value.

 SQL>  select sql_id,sql_text from v$sql where sql_text like 'select /*TEST_SIMILAR*/%';

SQL_ID        SQL_TEXT
------------- --------------------------------------------------------------------------------
7qsnvbzwh79tj select /*TEST_SIMILAR*/  count(*) from system.myobjects where owner=:"SYS_B_0"

We see that the SQL statement has been loaded only once and executed 3 times.

SQL> select sql_id,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TEST_SIMILAR*/%';

SQL_ID        CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------- ------------ ---------- ---------- -----------
7qsnvbzwh79tj            0          3          1           3

Note the number of versions – it is only one – which shows that the SQL statement has been shared.

SQL> select version_count from v$sqlarea where sql_id='7qsnvbzwh79tj';

VERSION_COUNT
-------------
            1

We need to keep in mind, however, that using inequalities, LIKE, etc. will prevent cursors from being shared even with SIMILAR.

SQL> select /*TEST_SIMILAR*/  distinct owner from myobjects where owner like 'A%';

SQL>  select /*TEST_SIMILAR*/  distinct owner from myobjects where owner like 'B%';

SQL> select /*TEST_SIMILAR*/  distinct owner from myobjects where owner like 'C%';

Note the child cursors created and also the version count – this shows that even though we were using SIMILAR for cursor sharing, the SQL statement has not been shared.

SQL> select sql_id,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TEST_SIMILAR*/%';

SQL_ID        CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------- ------------ ---------- ---------- -----------
4hxkxzpas5hyq            0          1          1           1
4hxkxzpas5hyq            1          1          1           1
4hxkxzpas5hyq            2          1          1           1

SQL> select version_count from v$sqlarea where sql_id='4hxkxzpas5hyq';

VERSION_COUNT
-------------
3

Let us now change CURSOR_SHARING to FORCE and see what the difference is.

SQL> alter system set cursor_sharing='FORCE';

System altered.

SQL> select /*FORCE*/  distinct owner from myobjects where owner like 'A%';

SQL> select /*FORCE*/  distinct owner from myobjects where owner like 'B%';

SQL> select /*FORCE*/  distinct owner from myobjects where owner like 'C%';

Note that there is only one child and the version count is one which shows that this SQL statement has been shared.

SQL> select version_count from v$sqlarea where sql_id='6c4f9xwp19pff';

VERSION_COUNT
-------------
1

SQL>  select sql_id,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*FORCE*/%';

SQL_ID        CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------- ------------ ---------- ---------- -----------
6c4f9xwp19pff            0          3          1           3

Let us now look at the effect Histograms have on the SIMILAR setting for the cursor_sharing parameter.

The presence of Histogram statistics on a column being used in the WHERE clause can lead to the cursor not being shared because there is the possibility of the optimizer choosing different plans for different values. In that case we will see multiple children being created as in the example below.

The CBO considers these literals to be UNSAFE

In this case we have created an index on the OBJECT_TYPE column, and when we gathered statistics for the table, Oracle created a histogram on that indexed column.
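The histogram itself can be confirmed from the DBA_TAB_COL_STATISTICS dictionary view:

SQL> select column_name, histogram, num_buckets
  2  from dba_tab_col_statistics
  3  where owner = 'SYSTEM' and table_name = 'MYOBJECTS';

A HISTOGRAM value such as FREQUENCY or HEIGHT BALANCED against OBJECT_TYPE is what marks literals on that column as unsafe.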

SQL> select /*TEST_SIMILAR*/  distinct owner from myobjects where object_type='A';

no rows selected

SQL> select /*TEST_SIMILAR*/  distinct owner from myobjects where object_type='B';

no rows selected

SQL> select /*TEST_SIMILAR*/  distinct owner from myobjects where object_type='C';

no rows selected

SQL> select /*TEST_SIMILAR*/  distinct owner from myobjects where object_type='D';

no rows selected

SQL> select /*TEST_SIMILAR*/  distinct owner from myobjects where object_type='E';
no rows selected

SQL> select sql_id,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TEST_SIMILAR*/%';

SQL_ID        CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------- ------------ ---------- ---------- -----------
2m80tr9fhhbwn            0          1          1           1
2m80tr9fhhbwn            1          1          1           1
2m80tr9fhhbwn            2          1          1           1
2m80tr9fhhbwn            3          1          1           1
2m80tr9fhhbwn            4          1          1           1

SQL>  select version_count from v$sqlarea where sql_id='2m80tr9fhhbwn';

VERSION_COUNT
-------------
            5

Let us now verify why the cursor was not shared.

SQL> select hash_value,address from v$sqlarea where sql_text like 'select /*TEST_SIMILAR*/%';

HASH_VALUE ADDRESS
---------- ----------------
1560817556 0000000095B58FD8

SQL> select * from v$sql_shared_cursor where address='0000000095B58FD8';

SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER U S O O S L F E B P I S T A B D L T B I I R L I O E M U T N F A I T D L D B P C S C P T M B M R O P M F L P L A F L R L H P B
------------- ---------------- ---------------- ------------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
2m80tr9fhhbwn 0000000095B58FD8 000000009160F6B8            0 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
2m80tr9fhhbwn 0000000095B58FD8 0000000091633E68            1 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N
2m80tr9fhhbwn 0000000095B58FD8 0000000091741E10            2 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N
2m80tr9fhhbwn 0000000095B58FD8 000000008DA39108            3 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N
2m80tr9fhhbwn 0000000095B58FD8 00000000947D5B30            4 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N

We can see that the reason for the cursor not being shared is because of the ‘Y’ in the HASH_MATCH_FAILED column.

Note that this is a new column added to the V$SQL_SHARED_CURSOR view in 11gR2. In this particular case it is because of the mismatched histogram data.

Let us look at another case where cursors are not shared because of what is called a STB_OBJECT_MISMATCH case.

STB_OBJECT here refers to SQL Management Objects such as SQL Profiles and SQL Plan Baselines.

If we are using SQL Plan Baselines, then we see a new child cursor being created between the first and second executions of the SQL statement, because a new SQL Management Object (in this case a baseline) was created, invalidating the cursor and causing a hard parse the next time the same SQL was executed.

In this example, with cursor_sharing=SIMILAR, because we are using SQL Plan Baselines, a new SQL Plan Management object was created which invalidated the existing child cursor, causing a new child cursor to be created for the same parent.

We can see that for the second child, a SQL Plan Baseline was used, which was not the case for the first child.

SQL> select /*TEST_SIMILAR*/  count(*) from myobjects where owner='SYS';

  COUNT(*)
----------
     31129

SQL> select /*TEST_SIMILAR*/  count(*) from myobjects where owner='SYSTEM';

  COUNT(*)
----------
       539

SQL> select /*TEST_SIMILAR*/  count(*) from myobjects where owner='CTXSYS';

  COUNT(*)
----------
       366

SQL> select sql_id,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TEST_SIMILAR*/%';

SQL_ID        CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------- ------------ ---------- ---------- -----------
68rgrrrpfas96            0          2          1           2
68rgrrrpfas96            1          1          1           1

SQL> select version_count from v$sqlarea where sql_id='68rgrrrpfas96';

VERSION_COUNT
-------------
            2

SQL> select hash_value,address from v$sqlarea where sql_text like 'select /*TEST_SIMILAR*/%';

HASH_VALUE ADDRESS
---------- ----------------
3940901158 0000000096498EA0

SQL> select * from v$sql_shared_cursor where address='0000000096498EA0';

SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER U S O O S L F E B P I S T A B D L T B I I R L I O E M U T N F A I T D L D B P C S C P T M B M R O P M F L P L A F L R L H P B
------------- ---------------- ---------------- ------------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
68rgrrrpfas96 0000000096498EA0 0000000099A374D0            0 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
68rgrrrpfas96 0000000096498EA0 0000000094546818            1 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N Y N N N N N N N N N N N N N N N N N N N N N N

SQL> select child_number,sql_profile,sql_plan_baseline from v$sql where sql_id='68rgrrrpfas96';

CHILD_NUMBER SQL_PROFILE                                                      SQL_PLAN_BASELINE
------------ ---------------------------------------------------------------- ------------------------------
           0
           1                                                                  SQL_PLAN_3m43x2gkdvd0vcaeddc3c

SQL> show parameter baseline

NAME                                 TYPE                             VALUE
------------------------------------ -------------------------------- ------------------------------
optimizer_capture_sql_plan_baselines boolean                          TRUE
optimizer_use_sql_plan_baselines     boolean                          TRUE
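The baseline that triggered the new child can itself be inspected in the DBA_SQL_PLAN_BASELINES view (a sketch; adjust the filter to your own SQL text):

SQL> select sql_handle, plan_name, enabled, accepted, origin
  2  from dba_sql_plan_baselines
  3  where sql_text like 'select /*TEST_SIMILAR*/%';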

Why do my execution plans not change after gathering statistics? – A look at Rolling Cursor Invalidations


In releases prior to Oracle 10g, gathering statistics with DBMS_STATS resulted in immediate invalidation of dependent cached cursors, unless the NO_INVALIDATE parameter of the DBMS_STATS call was set to TRUE.

It was felt that gathering statistics could actually have a negative impact on performance, because invalidating a cached cursor means it has to be hard-parsed the next time it is executed.

We know that excessive hard parsing could cause performance spikes because of the CPU usage taken by hard parse operations as well as contention for the library cache and shared pool latches.

In Oracle 10g the default for the NO_INVALIDATE parameter is now AUTO_INVALIDATE.

This means that Oracle will not immediately invalidate the cached cursors on gathering of fresh statistics, but wait for a period of time to elapse first.

This period of time is controlled by the parameter _optimizer_invalidation_period which defaults to a value of 18000 (seconds) or 5 hours.

This parameter specifies when dependent cursors (cursors cached in the library cache which reference a table, index, column or fixed object whose statistics have been modified) get invalidated after statistics are gathered.

So we may find that in some cases even after gathering statistics, the execution plan still remains the same. And it will remain the same unless the 5 hour period of time has elapsed or if the cursor had got invalidated for some other reason or if the cursor had been aged out of the shared pool and was then reloaded.

This reload would then trigger a new hard parse and consequently possibly a new execution plan as well.
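When an immediate re-parse is wanted for a particular table, the rolling behaviour can be overridden on an individual gather with the standard NO_INVALIDATE parameter:

SQL> exec dbms_stats.gather_table_stats(user, 'TESTME', no_invalidate => FALSE);

With NO_INVALIDATE => FALSE, dependent cursors are invalidated immediately instead of over the rolling window.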

Let us look at an example of this case.

We create a table TESTME based on the ALL_OBJECTS view.

The data in the table is quite skewed – in my case the table had 18 distinct values for OBJECT_TYPE and there were about 55000 rows. The value SYNONYM for the OBJECT_TYPE column accounted for over 20,000 rows, but there were other values like RULE which just had 1 row.

We created an index on the OBJECT_TYPE column for the test.

SQL> create table testme as select * from all_objects;

Table created.

SQL> create index testme_ind on testme(object_type);

Index created.

Note the current value for NO_INVALIDATE is AUTO_INVALIDATE, which is the default in 10g and above.

SQL> select dbms_stats.get_prefs('NO_INVALIDATE','SYSTEM','MYOBJECTS') from dual
  2  ;

DBMS_STATS.GET_PREFS('NO_INVALIDATE','SYSTEM','MYOBJECTS')
--------------------------------------------------------------------------------
DBMS_STATS.AUTO_INVALIDATE

We ran this SQL statement. Since it will return a single row, we can assume the CBO will choose the index in the execution plan.

SQL> select /*TESTME*/ distinct owner from testme where object_type='RULE';

SQL> select sql_id from v$sql where sql_text like 'select /*TESTME*/%';

SQL_ID
-------------
4a77d3s7xx1mc

SQL> select plan_hash_value,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TESTME*/%';

PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
--------------- ------------ ---------- ---------- -----------
     2890121575            0          1          1           1

We then updated the table and changed the value of OBJECT_TYPE to RULE for 25000 rows – about 50% of the table

SQL> update testme set object_type='RULE' where rownum < 25001;

25000 rows updated.

SQL> commit;

Commit complete.

After this UPDATE statement, we gather fresh statistics

SQL>  exec dbms_stats.gather_table_stats(USER,'TESTME');

PL/SQL procedure successfully completed.

We now run the same query again.

SQL> select /*TESTME*/ distinct owner from testme where object_type='RULE';

Since we are now selecting a high proportion of the rows in the table, we would expect the CBO to now choose a full table scan in place of the earlier index scan.

But has the plan changed? It does not appear so. The executions are now 2, but the plan hash value is still the same.

So it looks like the optimizer is still using the old execution plan for this statement. It still considers the table to have only one row with the OBJECT_TYPE value RULE.

The cursor in this case could potentially remain in the shared pool for up to 5 hours with an unchanged execution plan.

SQL> select plan_hash_value,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TESTME*/%';

PLAN_HASH_VALUE CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
--------------- ------------ ---------- ---------- -----------
     2890121575            0          2          1           2

SQL> select * from table(dbms_xplan.display_cursor('4a77d3s7xx1mc',null,null));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
SQL_ID  4a77d3s7xx1mc, child number 0
-------------------------------------
select /*TESTME*/ distinct owner from testme where
object_type=:"SYS_B_0"

Plan hash value: 2890121575

-------------------------------------------------------------------------------------------
| Id  | Operation                    | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |            |       |       |     3 (100)|          |

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------------
|   1 |  HASH UNIQUE                 |            |     1 |    28 |     3  (34)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| TESTME     |     1 |    28 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | TESTME_IND |     1 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Now let us change the value of the parameter _optimizer_invalidation_period to 60

SQL> alter system set "_optimizer_invalidation_period" = 60 scope=memory;  (default is 18000 or 5 hours)

System altered.

We gather statistics again

SQL> exec dbms_stats.gather_table_stats(USER,'TESTME');

After about a minute, execute the same statement again.

SQL> select /*TESTME*/ distinct owner from testme where object_type='RULE';

This time things are different.

We see a new child cursor, as the earlier child cursor has been invalidated. The version count has also changed, and we see a new plan hash value (which indicates a changed execution plan as well).

SQL>  select sql_id,child_number,EXECUTIONS,LOADS,PARSE_CALLS from v$sql where sql_text like 'select /*TESTME*/%';

SQL_ID        CHILD_NUMBER EXECUTIONS      LOADS PARSE_CALLS
------------- ------------ ---------- ---------- -----------
4a77d3s7xx1mc            0          2          3           2
4a77d3s7xx1mc            1          1          1           1

SQL> select version_count from v$sqlarea where sql_id='4a77d3s7xx1mc';

VERSION_COUNT
-------------
            2

SQL> select hash_value,address from v$sqlarea where sql_id='4a77d3s7xx1mc';

HASH_VALUE ADDRESS
---------- ----------------
266241644 0000000099A9BAD8

We see that the ROLL_INVALID_MISMATCH column of the V$SQL_SHARED_CURSOR view has a value 'Y'. This is the reason the cursor has got invalidated.

ROLL_INVALID_MISMATCH indicates that the child had to be created because the original cursor could not be reused, having been invalidated with rolling invalidation.

SQL> select * from v$sql_shared_cursor where address='0000000099A9BAD8';

SQL_ID        ADDRESS          CHILD_ADDRESS    CHILD_NUMBER U S O O S L F E B P I S T A B D L T B I I R L I O E M U T N
------------- ---------------- ---------------- ------------ - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
F A I T D L D B P C S C P T M B M R O P M F L P L A F L R L H P B
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
4a77d3s7xx1mc 0000000099A9BAD8 00000000915908F0            0 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N

4a77d3s7xx1mc 0000000099A9BAD8 000000008A9ED0A0            1 N N N N N N N N N N N N N N N N N N N N N N N N N N N N N N
N N N N N N N N N N N N N N N N N Y N N N N N N N N N N N N N N N

Further reading – Rolling Cursor Invalidations with DBMS_STATS.AUTO_INVALIDATE [ID 557661.1]
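As an aside, whether an individual statistics gathering call uses rolling invalidation is controlled by the no_invalidate parameter of DBMS_STATS; a quick sketch against the same TESTME table used above:

```sql
-- AUTO_INVALIDATE (the default) marks dependent cursors for rolling
-- invalidation, spread over _optimizer_invalidation_period seconds
exec dbms_stats.gather_table_stats(USER,'TESTME', no_invalidate => DBMS_STATS.AUTO_INVALIDATE);

-- no_invalidate => FALSE invalidates dependent cursors immediately,
-- while no_invalidate => TRUE leaves them valid
exec dbms_stats.gather_table_stats(USER,'TESTME', no_invalidate => FALSE);
```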

Upgrading the 10g E-Business Suite database to 11.2.0.3


As part of the Release 12.1.3 upgrade project, we upgraded the 11.5.10.2 Apps database to 11gR2 (11.2.0.3) from 10.2.0.4.

Best practice is to separate database upgrade or any platform migration activities from the actual R12 upgrade project. Many are tempted to use the downtime provided by the business for an R12 upgrade to schedule a 10g to 11g database upgrade, as well as in some cases a platform migration or 32 bit to 64 bit conversion, all in the same downtime.

From the point of view of availability that is great, but combining all these activities into one task only increases the risk as well as the complexity of the actual R12 upgrade.

In our case, we were moving from Solaris to Linux platform as well and we performed the platform migration on the 10g database itself.

Read about it here – http://gavinsoorma.com/2012/09/migrating-the-11i-e-business-suite-database-to-linux/

We then pointed the Solaris 11.5.10.2 Apps tier to the 10g database on Linux and then performed another task prior to the R12 upgrade, which was converting to OATM (the Oracle Applications Tablespace Model).

After the OATM conversion was completed, we are now upgrading the Linux 10.2.0.4 database to 11.2.0.3.

Download the note on the 11.2.0.3 database upgrade ….

Performing a 32 bit to 64 bit migration using the Transportable Database RMAN feature

This note describes the procedure used to perform a 32 bit to 64 bit conversion of an 11.2.0.3 database on the Linux platform.

The RMAN CONVERT DATABASE command is used to automate the movement of an entire database from one platform (the source platform) to another (the destination platform). 

This is provided that the source and destination platforms are of the same endian format.

For example between Linux X86 32 bit and Linux X86 64 bit.
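The endian format of each platform can be confirmed from the V$TRANSPORTABLE_PLATFORM view; for example:

```sql
SQL> select platform_id, platform_name, endian_format
     from v$transportable_platform
     order by platform_id;
```

Both Linux x86 and Linux x86 64-bit should be listed with a Little endian format.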

Note the following:

•	Certain types of blocks, such as blocks in undo segments, need to be reformatted to ensure compatibility with the destination platform.

•	Redo log files and control files from the source database are not transported. New control files and redo log files are created for the new database during the transport process, and an OPEN RESETLOGS is performed once the new database is created.

•	BFILEs are not transported. RMAN provides a list of objects using the BFILE datatype in the output for the CONVERT DATABASE command, but users must copy the BFILEs themselves and fix their locations on the destination database.

•	Tempfiles belonging to locally managed temporary tablespaces are not transported. The temporary tablespace will be re-created on the target platform when the transport script is run.

•	External tables and directories are not transported.

•	Password files are not transported. If a password file was used with the source database, the output of CONVERT DATABASE includes a list of all usernames and their associated privileges. Create a new password file on the destination database using this information.
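As a rough sketch of the two key commands involved (database name, transport script and datafile paths below are hypothetical), the conversion boils down to a DBMS_TDB readiness check on the source, opened read only, followed by the RMAN CONVERT DATABASE command:

```sql
-- On the source, opened read only: verify the database can be transported
SQL> set serveroutput on
SQL> declare
       ok boolean;
     begin
       ok := dbms_tdb.check_db('Linux x86 64-bit', dbms_tdb.skip_none);
     end;
     /

-- Then run the conversion from RMAN
RMAN> CONVERT DATABASE NEW DATABASE 'orcl64'
        TRANSPORT SCRIPT '/stage/transport_orcl64.sql'
        TO PLATFORM 'Linux x86 64-bit'
        DB_FILE_NAME_CONVERT '/u01/oradata/orcl32/','/u01/oradata/orcl64/';
```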

Download the note ....

Configuring APEX 4.2 and the APEX Listener on Oracle 11g WebLogic Server


This note describes the procedure used to configure the APEX listener (and APEX 4.2) using an existing Oracle 11g (10.3.2) WebLogic server running on a Linux x86 64 platform.

The new Oracle APEX Listener is a J2EE based alternative for Oracle Apache HTTP Server (OHS) and mod_plsql.

The APEX Listener supports deployments using Oracle WebLogic Server (WLS), Oracle GlassFish Server and OC4J.

Download apex_4.2_en.zip from the following location:

http://www.oracle.com/technetwork/developer-tools/apex/downloads/index.html

This note pertains to APEX Listener version 1.1.3.

APEX Listener 1.1.4 is now available and can be downloaded from:

http://www.oracle.com/technetwork/developer-tools/apex/downloads/index.html

We first need to install APEX 4.2 in the database. This note does not discuss this and assumes that this has already been completed.

Read the note: Configuring the APEX Listener on Oracle 11g WebLogic Server




ASH and AWR Performance Tuning Scripts


Listed below are some SQL queries which I find particularly useful for performance tuning. These are based on the Active Session History V$ View to get a current perspective of performance and the DBA_HIST_* AWR history tables for obtaining performance data pertaining to a period of time in the past.

I would like to add that these queries have been customised by me based on SQL scripts obtained from colleagues and peers, so if I am infringing any copyright material, let me know and I shall remove it. Also, if anyone has any similar useful scripts to contribute for use by the community, do send them to me and I shall include them on this page.

 

Top Recent Wait Events

col EVENT format a60 

select * from (
select active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history
where active_session_history.event is not null
group by active_session_history.event
order by 2 desc)
where rownum < 6
/

Top Wait Events Since Instance Startup

col event format a60

select event, total_waits, time_waited
from v$system_event e, v$event_name n
where n.event_id = e.event_id
and n.wait_class !='Idle'
and n.wait_class = (select wait_class from v$session_wait_class
 where wait_class !='Idle'
 group by wait_class having
sum(time_waited) = (select max(sum(time_waited)) from v$session_wait_class
where wait_class !='Idle'
group by (wait_class)))
order by 3;

List Of Users Currently Waiting

col username format a12
col sid format 9999
col state format a15
col event format a50
col wait_time format 99999999
set pagesize 100
set linesize 120

select s.sid, s.username, se.event, se.state, se.wait_time
from v$session s, v$session_wait se
where s.sid=se.sid
and se.event not like 'SQL*Net%'
and se.event not like '%rdbms%'
and s.username is not null
order by se.wait_time;

Find The Main Database Wait Events In A Particular Time Interval

First determine the snapshot id values for the period in question.

In this example we need to find the SNAP_ID for the period 10 PM to 11 PM on the 14th of November, 2012.

select snap_id,begin_interval_time,end_interval_time
from dba_hist_snapshot
where to_char(begin_interval_time,'DD-MON-YYYY')='14-NOV-2012'
and EXTRACT(HOUR FROM begin_interval_time) between 22 and 23;

set verify off
select * from (
select active_session_history.event,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from dba_hist_active_sess_history active_session_history
where event is not null
and SNAP_ID between &ssnapid and &esnapid
group by active_session_history.event
order by 2 desc)
where rownum < 6;

Top CPU Consuming SQL During A Certain Time Period

Note – in this case we are finding the Top 5 CPU intensive SQL statements executed between 9.00 AM and 11.00 AM

select * from (
select
SQL_ID,
 sum(CPU_TIME_DELTA),
sum(DISK_READS_DELTA),
count(*)
from
DBA_HIST_SQLSTAT a, dba_hist_snapshot s
where
s.snap_id = a.snap_id
and s.begin_interval_time > sysdate -1
and EXTRACT(HOUR FROM S.END_INTERVAL_TIME) between 9 and 11
group by
SQL_ID
order by
sum(CPU_TIME_DELTA) desc)
where rownum < 6;

Which Database Objects Experienced the Most Number of Waits in the Past One Hour

set linesize 120
col event format a40
col object_name format a40

select * from 
(
  select dba_objects.object_name,
 dba_objects.object_type,
active_session_history.event,
 sum(active_session_history.wait_time +
  active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
    dba_objects
 where 
active_session_history.sample_time between sysdate - 1/24 and sysdate
and active_session_history.current_obj# = dba_objects.object_id
 group by dba_objects.object_name, dba_objects.object_type, active_session_history.event
 order by 4 desc)
where rownum < 6;

Top Segments ordered by Physical Reads

col segment_name format a20
col owner format a10 
select segment_name,object_type,total_physical_reads
 from ( select owner||'.'||object_name as segment_name,object_type,
value as total_physical_reads
from v$segment_statistics
 where statistic_name in ('physical reads')
 order by total_physical_reads desc)
where rownum < 6;

Top 5 SQL statements in the past one hour

select * from (
select active_session_history.sql_id,
 dba_users.username,
 sqlarea.sql_text,
sum(active_session_history.wait_time +
active_session_history.time_waited) ttl_wait_time
from v$active_session_history active_session_history,
v$sqlarea sqlarea,
 dba_users
where 
active_session_history.sample_time between sysdate -  1/24  and sysdate
  and active_session_history.sql_id = sqlarea.sql_id
and active_session_history.user_id = dba_users.user_id
 group by active_session_history.sql_id,sqlarea.sql_text, dba_users.username
 order by 4 desc )
where rownum < 6;

SQL with the highest I/O in the past one day

select * from 
(
SELECT /*+LEADING(x h) USE_NL(h)*/ 
       h.sql_id
,      SUM(10) ash_secs
FROM   dba_hist_snapshot x
,      dba_hist_active_sess_history h
WHERE   x.begin_interval_time > sysdate -1
AND    h.SNAP_id = X.SNAP_id
AND    h.dbid = x.dbid
AND    h.instance_number = x.instance_number
AND    h.event in  ('db file sequential read','db file scattered read')
GROUP BY h.sql_id
ORDER BY ash_secs desc )
where rownum < 6;

Top CPU consuming queries since past one day

select * from (
select 
	SQL_ID, 
	sum(CPU_TIME_DELTA), 
	sum(DISK_READS_DELTA),
	count(*)
from 
	DBA_HIST_SQLSTAT a, dba_hist_snapshot s
where
 s.snap_id = a.snap_id
 and s.begin_interval_time > sysdate -1
	group by 
	SQL_ID
order by 
	sum(CPU_TIME_DELTA) desc)
where rownum < 6;

Find what the top SQL was at a particular reported time of day

First determine the snapshot id values for the period in question.

In this example we need to find the SNAP_ID for the period 10 PM to 11 PM on the 14th of November, 2012.

select snap_id,begin_interval_time,end_interval_time
from dba_hist_snapshot
where to_char(begin_interval_time,'DD-MON-YYYY')='14-NOV-2012'
and EXTRACT(HOUR FROM begin_interval_time) between 22 and 23;
select * from
 (
select
 sql.sql_id c1,
sql.buffer_gets_delta c2,
sql.disk_reads_delta c3,
sql.iowait_delta c4
 from
dba_hist_sqlstat sql,
dba_hist_snapshot s
 where
 s.snap_id = sql.snap_id
and
 s.snap_id= &snapid
 order by
 c3 desc)
 where rownum < 6 
/

Analyse a particular SQL ID and see the trends for the past day

select
 s.snap_id,
 to_char(s.begin_interval_time,'HH24:MI') c1,
 sql.executions_delta c2,
 sql.buffer_gets_delta c3,
 sql.disk_reads_delta c4,
 sql.iowait_delta c5,
sql.cpu_time_delta c6,
 sql.elapsed_time_delta c7
 from
 dba_hist_sqlstat sql,
 dba_hist_snapshot s
 where
 s.snap_id = sql.snap_id
 and s.begin_interval_time > sysdate -1
 and
sql.sql_id='&sqlid'
 order by c7
 /

Do we have multiple plan hash values for the same SQL ID – in that case may be changed plan is causing bad performance

select 
  SQL_ID 
, PLAN_HASH_VALUE 
, sum(EXECUTIONS_DELTA) EXECUTIONS
, sum(ROWS_PROCESSED_DELTA) CROWS
, trunc(sum(CPU_TIME_DELTA)/1000000/60) CPU_MINS
, trunc(sum(ELAPSED_TIME_DELTA)/1000000/60)  ELA_MINS
from DBA_HIST_SQLSTAT 
where SQL_ID in (
'&sqlid') 
group by SQL_ID , PLAN_HASH_VALUE
order by SQL_ID, CPU_MINS;

Top 5 Queries for past week based on ADDM recommendations

/*
Top 5 SQL_IDs for the last 7 days as identified by ADDM
from DBA_ADVISOR_RECOMMENDATIONS and dba_advisor_log
*/

col SQL_ID form a16
col Benefit form 9999999999999
select * from (
select b.ATTR1 as SQL_ID, max(a.BENEFIT) as "Benefit" 
from DBA_ADVISOR_RECOMMENDATIONS a, DBA_ADVISOR_OBJECTS b 
where a.REC_ID = b.OBJECT_ID
and a.TASK_ID = b.TASK_ID
and a.TASK_ID in (select distinct b.task_id
from dba_hist_snapshot a, dba_advisor_tasks b, dba_advisor_log l
where a.begin_interval_time > sysdate - 7 
and  a.dbid = (select dbid from v$database) 
and a.INSTANCE_NUMBER = (select INSTANCE_NUMBER from v$instance) 
and to_char(a.begin_interval_time, 'yyyymmddHH24') = to_char(b.created, 'yyyymmddHH24') 
and b.advisor_name = 'ADDM' 
and b.task_id = l.task_id 
and l.status = 'COMPLETED') 
and length(b.ATTR4) > 1 group by b.ATTR1
order by max(a.BENEFIT) desc) where rownum < 6;

Upgrading EM12c Release 1 (12.1.0.1) to 12c Release 2 (12.1.0.2)


I recently upgraded EM 12c version 12.1.0.1 to EM 12c Release 2 (12.1.0.2) on a Linux 64 bit platform. Here are a few of the things to keep in mind and some notes I made which may be helpful to others planning a similar upgrade.

The 12c Release 2 software is made up of three zip files. Ensure we have 10 to 15 GB free in the stage area where we are going to unzip the files.

em12cr2_linux64_disk1.zip
em12cr2_linux64_disk2.zip
em12cr2_linux64_disk3.zip

The 12.1.0.1 to 12.1.0.2 upgrade is an out of place or what is called a 1 system upgrade.

The 1-system upgrade approach upgrades Enterprise Manager 12c Cloud Control on the same host: it upgrades the Oracle Management Service (OMS) on the same host and the Oracle Management Repository (Management Repository) in the existing database. Since the upgrade happens on the same host, there will be downtime involved in the upgrade.

Ensure we have at least 10 GB free on the OMS host for the 12c Release 2 Middleware Home. When we upgrade the 12c management agent we also need to ensure that there is a minimum of 1 GB of free disk space in the 12c Release 1 Agent Home.

 

Before the upgrade

 

Ensure that we have taken a backup of the 12c Release 1 OMS (the middleware home and the inventory), the Management Repository, and the Software Library.

Ensure that the tables in the Management Repository do not have any snapshots created.

Log in to the Management Repository and run the following SQL query as SYSMAN user:

SQL> select master, log_table from all_mview_logs where log_owner='SYSMAN';

no rows selected

If there are snapshots, then drop them by running the following command as SYSMAN user:

SQL> Drop snapshot log on <master> ;

Copy the emkey from the existing OMS to the existing Management Repository

[oracle@kens-oem-prod bin]$ ./emctl config emkey -copy_to_repos
Oracle Enterprise Manager Cloud Control 12c Release 12.1.0.1.0
Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey has been copied to the Management Repository. This operation will cause the EMKey to become unsecure.
After the required operation has been completed, secure the EMKey by running "emctl config emkey -remove_from_repos".
[oracle@kens-oem-prod bin]$

[oracle@kens-oem-prod bin]$ ./emctl status emkey
Oracle Enterprise Manager Cloud Control 12c Release 12.1.0.1.0
Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
Enter Enterprise Manager Root (SYSMAN) Password :
The EMKey is configured properly, but is not secure. Secure the EMKey by running "emctl config emkey -remove_from_repos".
[oracle@kens-oem-prod bin]$

Shut down the Management Agent that monitors the Management Services and Repository target

Launch the 12c R2 installer from the location where we have unzipped the 12.1.0.2 software.

 

 

Enter the location of the 12c R2 Middleware and OMS home – so we have to enter a new location since this is a One-System approach

 

At this point we need to stop the OMS.

$ <OMS_HOME>/bin/emctl stop oms

 

Make a note of the current value of the parameter job_queue_processes as the upgrade resets it to 0. After the upgrade check if the value has changed from 0 to the original pre-upgrade value
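A simple before-and-after check for this parameter might look as follows (the value 1000 below is just a placeholder for whatever your pre-upgrade setting was):

```sql
-- Before the upgrade: record the current value
SQL> show parameter job_queue_processes

-- After the upgrade: if it is still 0, restore the recorded value
SQL> alter system set job_queue_processes = 1000 scope=both;
```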

 

The deployed plug-ins are upgraded if newer versions are available in the Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2.0) software.

If newer versions are not available in the Enterprise Manager Cloud Control 12c Release 2 (12.1.0.2.0) software, then existing deployed plug-ins are carried over without upgrading them.

 

 

Java Development Kit (JDK) 1.6 v24 and Oracle WebLogic Server 11g Release 1 (10.3.5) are installed by the upgrade process if they are not already available in the Middleware home we specify.

A new Oracle WebLogic domain is created using the existing Administration Server configuration details.

It creates a new Oracle Management Service Instance Base directory (gc_inst) for storing all configuration details related to Oracle Management Service 12c R2.

After the Upgrade

 

The installer does NOT upgrade the existing 12.1.0.1 Management Agent that was installed with the OMS. You must upgrade it (and all other Management Agents) using the Upgrade Agents Console.

The ports used by the earlier release of the Management Agents are carried over to the upgraded Management Agents so we do not need to make any changes to our firewall settings as such.

The 12.1.0.1 agent is compatible with the EM 12c Release 2 OMS. So we can still continue using the 12.1.0.1 management agents with the 12.1.0.2 OMS, but a preferred option would be to upgrade the existing 12c Release 1 agents.

After the upgrade we can see that an additional item has been added to the SetUp menu which is Manage Cloud Control.

From the Manage Cloud Control menu option we select Upgrade Agents.

 

We can now see a list of all our existing 12.1.0.1 agents.

Select the agent we would like to upgrade. In this case the Privilege Delegation setting has not been set up for this particular host, so we need to run root.sh manually.

Note that after the upgrade another directory called 12.1.0.2.0 is created under the AGENT_HOME base/core.

To start and stop the 12c R2 agent we need to ensure that we are now using the emctl executable located under, for example:

/u01/app/agent12c/core/12.1.0.2.0/bin

 

[root@kens-oem-prod 12.1.0.2.0]# pwd
/u01/app/em12c/Middleware/agent/core/12.1.0.2.0

[root@kens-oem-prod 12.1.0.2.0]# ./root.sh
Finished product-specific root actions.
/etc exist
Finished product-specific root actions.
[root@kens-oem-prod 12.1.0.2.0]#

[root@kens-orasql-001-dev 12.1.0.2.0]# ./replacebins.sh
Replaced sbin executables

[oracle@kens-oem-prod bin]$ ./emctl status agent

Oracle Enterprise Manager Cloud Control 12c Release 2
Copyright (c) 1996, 2012 Oracle Corporation. All rights reserved.
—————————————————————
Agent Version : 12.1.0.2.0
OMS Version : 12.1.0.2.0
Protocol Version : 12.1.0.1.0
Agent Home : /u01/app/em12c/Middleware/agent/agent_inst
Agent Binaries : /u01/app/em12c/Middleware/agent/core/12.1.0.2.0
Agent Process ID : 6725
Parent Process ID : 6673
Agent URL : https://kens-oem-prod.corporateict.domain:3872/emd/main/
Repository URL : https://kens-oem-prod.corporateict.domain:4900/empbs/upload
Started at : 2012-11-13 22:33:30
Started by user : oracle
Last Reload : 2012-11-13 22:33:53
Last successful upload : 2012-11-13 22:36:06
Last attempted upload : 2012-11-13 22:36:06
Total Megabytes of XML files uploaded so far : 0.05
Number of XML files pending upload : 0
Size of XML files pending upload(MB) : 0
Available disk space on upload filesystem : 13.80%
Collection Status : Collections enabled
Heartbeat Status : Ok
Last attempted heartbeat to OMS : 2012-11-13 22:35:54
Last successful heartbeat to OMS : 2012-11-13 22:35:54
Next scheduled heartbeat to OMS : 2012-11-13 22:36:54

—————————————————————
Agent is Running and Ready

Oracle GoldenGate 11gR2 high availability with Oracle 11gR2 Clusterware


This note discusses how to achieve high availability in an Oracle 11g GoldenGate environment using Oracle 11gR2 Grid Infrastructure Clusterware services.

The GoldenGate Manager process has some configuration parameters which we can set to prevent outages, such as AUTOSTART and AUTORESTART, which ensure that the GoldenGate Extract and Replicat processes get restarted in the event of a failure – but we need to note that for this to happen, the Manager process itself needs to be up and running.
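For reference, the AUTOSTART and AUTORESTART parameters mentioned above might look like this in mgr.prm (the retry values here are illustrative):

```
PORT 7809
-- Start Extract and Replicat groups when the Manager starts
AUTOSTART ER *
-- Retry a failed Extract/Replicat 3 times, 5 minutes apart
AUTORESTART ER *, RETRIES 3, WAITMINUTES 5, RESETMINUTES 60
```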

But what happens if the manager process itself fails as a result of a server or OS crash or network failure or database crash?

So let us look at how to provide high availability for the manager process in a clustered environment.

In this example we are using GoldenGate in a two-node 11gR2 RAC cluster with ASM storage configured.

In a two-node RAC cluster configuration, Oracle GoldenGate runs on only one server at any given time. If that server goes down, GoldenGate is restarted automatically by CRS on the other available node.

While we can install GoldenGate on both nodes, we need to ensure that the checkpoint files (located in the dirchk directory) and the trail files reside in a location accessible from any node, and that the parameter files are identical on both nodes of the cluster.

So in our case we are using ACFS, the Automatic Storage Management (ASM) Cluster File System, which acts as a shared storage location. We need to install GoldenGate just once in this shared location, and it can then be accessed from either node in the cluster as the ACFS is mounted on both nodes.

In addition to configuring ACFS, we also need to obtain an unused IP address on the public subnet which is registered in DNS – this is the VIP or Virtual IP address. In the event of a node failure, Oracle Clusterware will migrate the VIP to a surviving node in the cluster.

 

Have a read of the Oracle white paper on the subject – I have followed that in setting up the GoldenGate clusterware.

Oracle White Paper—Oracle GoldenGate high availability with Oracle Clusterware
 

Let us now take a look at the steps involved.

 

Install the GoldenGate software on the shared location

The first thing we are doing here is to setup and configure ACFS and install the 11gR2 GoldenGate software in this shared location.

In our case, we have a mountpoint /goldengate which is the ASM Cluster File System and this file system is the shared location for the GoldenGate software mounted on both nodes in the cluster.

In short create the ASM disk group, then the volume and finally the ASM Cluster File System.

Install the GoldenGate 11gR2 software in this location. As this is a shared location, we need to install GoldenGate just once.
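The ACFS setup summarised above might look like the following sketch (the disk path, volume name, size and device name are hypothetical; the SQL runs as the grid user against the ASM instance, and the mkfs/mount steps run as root):

```
SQL> create diskgroup GGDATA external redundancy disk '/dev/asm-disk5';
SQL> alter diskgroup GGDATA add volume ggvol size 10g;

$ asmcmd volinfo -G GGDATA ggvol         -- note the volume device path
# mkfs -t acfs /dev/asm/ggvol-123
# mount -t acfs /dev/asm/ggvol-123 /goldengate    -- on both nodes
```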

 

Create the application VIP

 

From the GRID_HOME/bin directory run the appvipcfg command to create the application VIP. Oracle Clusterware assigns this VIP to a physical server in the cluster and will migrate the VIP to a surviving node in the cluster in the event of a server failure.

As root:

[root@mycorp-racnode1 bin]# ./appvipcfg create -network=1 -ip=10.50.20.52 -vipname=mycorp-oragg-vip -user=root
Production Copyright 2007, 2008, Oracle.All rights reserved
2012-11-28 03:21:48: Skipping type creation
2012-11-28 03:21:48: Create the Resource
2012-11-28 03:21:48: Executing /u01/app/11.2.0.3/grid/bin/crsctl add resource mycorp-oragg-vip -type app.appvip_net1.type -attr "USR_ORA_VIP=10.6.20.52,START_DEPENDENCIES=hard(ora.net1.network) pullup(ora.net1.network),STOP_DEPENDENCIES=hard(ora.net1.network),ACL='owner:root:rwx,pgrp:root:r-x,other::r--,user:root:r-x',HOSTING_MEMBERS=mycorp-racnode1,APPSVIP_FAILBACK="
2012-11-28 03:21:48: Executing cmd: /u01/app/11.2.0.3/grid/bin/crsctl add resource mycorp-oragg-vip -type app.appvip_net1.type -attr "USR_ORA_VIP=10.6.20.52,START_DEP
We also have to grant permissions to the Grid Infrastructure owner (grid) and the Oracle database and GoldenGate software owner (oracle) to run the script which starts the VIP.

As root:

[root@mycorp-racnode1 bin]# ./crsctl setperm resource mycorp-oragg-vip -u user:oracle:r-x
[root@mycorp-racnode1 bin]# ./crsctl setperm resource mycorp-oragg-vip -u user:grid:r-x

As oracle, start the application VIP via the crsctl start resource command from the GRID_HOME/bin

[oracle@mycorp-racnode1 bin]# ./crsctl start resource mycorp-oragg-vip
CRS-2672: Attempting to start 'mycorp-oragg-vip' on 'mycorp-racnode2'
CRS-2676: Start of 'mycorp-oragg-vip' on 'mycorp-racnode2' succeeded

We can now verify whether VIP is running and on which node it is running via the crsctl status resource command

[oracle@mycorp-racnode1 bin]# ./crsctl status resource mycorp-oragg-vip
NAME=mycorp-oragg-vip
TYPE=app.appvip_net1.type
TARGET=ONLINE
STATE=ONLINE on mycorp-racnode2

From another node in the cluster we should now be able to ping the VIP’s IP address.

 

Create the Agent script

The agent script is used by Oracle Clusterware to check if the manager process is running and also to stop and start the manager as the case may be.

There is a sample script available in the appendix of the Oracle white paper (link mentioned above) which can be modified accordingly.

Here is the script I have used in my case. You will need to change the location of the GGS_HOME, CRS_HOME, ORACLE_HOME, LD_LIBRARY_PATH, and, if you are using ASM, the ASMPASSWORD value which is embedded in the agent script.

 

This script is called 11gr2_gg_action.sh and is copied to the /goldengate directory location so that it is accessible from either node in the cluster.

 

11gr2_gg_action.sh

 

Register a resource in Oracle Clusterware

 

We run the crsctl add resource command to register Oracle GoldenGate as a resource in Oracle Clusterware. The resource name in this case is ggate

 

[oracle@mycorp-racnode1 bin]# ./crsctl add resource ggate -type cluster_resource \ 
-attr "ACTION_SCRIPT=/goldengate/11gr2_gg_action.sh,CHECK_INTERVAL=30, \ 
START_DEPENDENCIES='hard(mycorp-oragg-vip,ora.asm) pullup(mycorp-oragg-vip)', STOP_DEPENDENCIES='hard(mycorp-oragg-vip)'"

The START_DEPENDENCIES and STOP_DEPENDENCIES attributes indicate that the VIP and the ggate resource should always start and stop together.

Since in our case the Oracle GoldenGate software owner is not the same as the Grid Infrastructure owner, we need to run the crsctl setperm command to set the ownership of the application to the GoldenGate software owner.

As root:
[root@mycorp-racnode1 bin]# ./crsctl setperm resource ggate -o oracle

Start the Application

We can now use Oracle Clusterware to start GoldenGate. We connect as oracle and run the crsctl start resource command from the GRID_HOME/bin.

[oracle@mycorp-racnode1 bin]$ ./crsctl start resource ggate
CRS-2672: Attempting to start 'ggate' on 'mycorp-racnode1'
CRS-2676: Start of 'ggate' on 'mycorp-racnode1' succeeded

We can use the crsctl status resource command to verify the state of the resource and which node in the cluster it is currently running on.

[oracle@mycorp-racnode1 bin]$ ./crsctl status resource ggate
NAME=ggate
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on mycorp-racnode1

If we connect to GoldenGate, we can see that the manager process has been started up automatically.

GGSCI (mycorp-racnode1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER RUNNING 
I found that if the manager process was already running and we then issued the crsctl start resource command, we got the following error message:
[oracle@mycorp-racnode1 bin]$ ./crsctl start resource ggate
CRS-2672: Attempting to start 'ggate' on 'mycorp-racnode1'
CRS-2674: Start of 'ggate' on 'mycorp-racnode1' failed
CRS-2679: Attempting to clean 'ggate' on 'mycorp-racnode1'
CRS-2681: Clean of 'ggate' on 'mycorp-racnode1' succeeded
CRS-2527: Unable to start 'ggate' because it has a 'hard' dependency on 'mycorp-oragg-vip'
CRS-2525: All instances of the resource 'mycorp-oragg-vip' are already running; relocate is not allowed because the force option was not specified
CRS-4000: Command Start failed, or completed with errors.

What happens if we manually stop the manager process?
Oracle Clusterware starts it up!

GGSCI (mycorp-racnode1) 2> stop manager
Manager process is required by other GGS processes.
Are you sure you want to stop it (y/n)? y

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

GGSCI (mycorp-racnode1) 3> quit
[oracle@mycorp-racnode1 goldengate]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (mycorp-racnode1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

While GoldenGate is running we can test the failover process by relocating it to another node in the cluster via the crsctl relocate resource command

Note that we have to use the -f (force) option because it is already running when we are trying to manually relocate it to another node.

[oracle@mycorp-racnode1 bin]$ ./crsctl relocate resource ggate -f
CRS-2673: Attempting to stop 'ggate' on 'mycorp-racnode1'
CRS-2677: Stop of 'ggate' on 'mycorp-racnode1' succeeded
CRS-2673: Attempting to stop 'mycorp-oragg-vip' on 'mycorp-racnode1'
CRS-2677: Stop of 'mycorp-oragg-vip' on 'mycorp-racnode1' succeeded
CRS-2672: Attempting to start 'mycorp-oragg-vip' on 'mycorp-racnode2'
CRS-2676: Start of 'mycorp-oragg-vip' on 'mycorp-racnode2' succeeded
CRS-2672: Attempting to start 'ggate' on 'mycorp-racnode2'
CRS-2676: Start of 'ggate' on 'mycorp-racnode2' succeeded

We can now see that the Manager process has been started up on the second node, and via the crsctl status resource command we can verify that the ggate resource is now running on the second node in the cluster (mycorp-racnode2).

GGSCI (mycorp-racnode2) 3> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING

[oracle@mycorp-racnode1 bin]$ ./crsctl status resource ggate
NAME=ggate
TYPE=cluster_resource
TARGET=ONLINE
STATE=ONLINE on mycorp-racnode2

What happens if we kill the Manager process?

[grid@mycorp-racnode1 goldengate]$ ps -ef |grep mgr
grid     11796     1  0 05:41 ?        00:00:00 ./mgr PARAMFILE /goldengate/dirprm/mgr.prm REPORTFILE /goldengate/dirrpt/MGR.rpt PROCESSID MGR PORT 7809
grid     12763 12669  0 05:50 pts/0    00:00:00 grep mgr

[grid@mycorp-racnode1 goldengate]$ kill -9 11796

As soon as I killed the Manager process at the OS level, I quickly checked the status of the Manager as well as the Extract process that was running at the time.

Note the status of both processes and how quickly the status changes as Oracle Clusterware restarts the Manager.

GGSCI (mycorp-racnode1) 1> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED
EXTRACT     RUNNING     EXT1        00:00:00      00:00:02

GGSCI (mycorp-racnode1) 2> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     ABENDED     EXT1        00:00:00      00:00:06

GGSCI (mycorp-racnode1) 3> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     EXT1        00:00:00      00:00:10

GoldenGate Integrated Capture Mode


One of the new features in GoldenGate 11g is the Integrated Capture mode.

In the earlier classic capture mode, the Oracle GoldenGate Extract process captures data changes from the Oracle redo or archive log files on the source system.

In integrated capture mode, the Oracle GoldenGate Extract process interacts directly with the database log mining server, which mines or reads the database redo log files and captures the changes in the form of Logical Change Records (LCRs); these are then written to the GoldenGate trail files.

The basic difference is that in Integrated Capture mode, the extract process does not directly read the redo log files; that part of the job is done by the log mining server residing in the Oracle database.

Integrated capture supports more data types as well as compressed data, and as it is fully integrated with the database, there are no additional setup steps required when configuring GoldenGate with features such as RAC, ASM and TDE (Transparent Data Encryption).

In the integrated capture mode there are two deployment options:

a) Local deployment
b) Downstream deployment

Basically it depends on where the log mining server is deployed.

In the local deployment, the source database and the log mining database are the same database.

In downstream deployment, the source and log mining databases are different databases. The source database uses redo transport to ship the archived redo log files to the ‘downstream’ database where the log mining server is residing. The log mining server extracts changes in the form of logical change records and these are then processed by GoldenGate and written to the trail files.

So in the downstream integrated capture mode, we offload any overhead associated with the capture or transformation from the source database to the downstream database which may be used only for GoldenGate processing and not for any production user connections.

In this example we will look at the setup of integrated capture local deployment and in the next post we will look at a downstream integrated capture model.

Database setup for Integrated Capture

Keep in mind that for full integrated capture support of all Oracle data and storage types, the compatibility setting of the source database must be at least 11.2.0.3.

Also, we need to apply database patch 14551959 using opatch; read MOS note 1411356.1 for full details.

After applying patch 14551959 (the database and listener need to be down while applying it) using opatch, we also need to perform some post-install steps as mentioned in the README.txt.

We need to start the database and run the postinstall.sql script located in the patch directory.

This is to be followed by granting certain privileges to the GoldenGate database user account via the package
DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE as shown below. In this case the database user is ‘ggate’.

EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE( -
grantee => 'ggate', -
privilege_type => 'capture', grant_select_privileges=> true, do_grants => TRUE);

If the patch is not applied or the privileges not granted, we can expect to see an error like the one shown below:

2013-01-24 17:30:24 ERROR OGG-02021 This database lacks the required libraries to support integrated capture.

What’s different from the classic capture setup?

When we add the extract we have to use the INTEGRATED CAPTURE clause in the ADD EXTRACT command as shown below

ADD EXTRACT intext INTEGRATED TRANLOG, BEGIN NOW

In the extract parameter file we have to use the TRANLOGOPTIONS INTEGRATEDPARAMS parameter as shown below

TRANLOGOPTIONS INTEGRATEDPARAMS (max_sga_size 200, parallelism 1)

The max_sga_size is specified in MB, and this memory is taken from the streams_pool_size portion of the SGA. If streams_pool_size is greater than 1 GB, max_sga_size defaults to 1 GB; otherwise it defaults to 75% of streams_pool_size.

To test this I set max_sga_size to 200 MB while streams_pool_size was also 200 MB.

This error was noticed and the extract abended.

2013-01-24 17:59:42 ERROR OGG-02050 Not enough database memory to honor requested MAX_SGA_SIZE of 200.
2013-01-24 17:59:42 ERROR OGG-01668 PROCESS ABENDING.

We had to set the max_sga_size in this case to 150 and then the extract started.

The parallelism parameter specifies the number of processes supporting the database log mining server; it defaults to 2.
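As a quick sanity check of the sizing rule above, the default can be sketched in a few lines of shell. This is purely an illustration of the documented rule, not a GoldenGate utility:

```shell
# Illustrative helper (not a GoldenGate tool): compute the default
# MAX_SGA_SIZE in MB from STREAMS_POOL_SIZE in MB, per the rule above.
default_max_sga_size() {
  local streams_pool_mb=$1
  if [ "$streams_pool_mb" -gt 1024 ]; then
    echo 1024                              # capped at 1 GB
  else
    echo $(( streams_pool_mb * 75 / 100 )) # 75% of streams_pool_size
  fi
}

default_max_sga_size 200    # prints 150
default_max_sga_size 2048   # prints 1024
```

This is consistent with the observation above: requesting the full 200 MB (equal to streams_pool_size) failed with OGG-02050, while 150 MB (75% of the pool) worked.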

Register the extract

We use the REGISTER EXTRACT command to register the primary extract group with the Oracle database. The extract process does not directly read the redo log files as in classic capture mode, but integrates with the database log mining server to receive changes in the form of Logical Change Records (LCRs).

We do this before adding the extract and must connect to the database first via the DBLOGIN command

GGSCI> DBLOGIN USER dbuser PASSWORD dbpasswd
GGSCI> REGISTER EXTRACT ext1 DATABASE

Example


In this case we are creating the extract group intext and the extract datapump group intdp. We will be replicating the SH.customers table using the integrated capture mode.

GGSCI (pdemvrhl061) 1> DBLOGIN USERID ggate, PASSWORD ggate
Successfully logged into database.
GGSCI (pdemvrhl061) 2> REGISTER EXTRACT intext DATABASE
2013-01-24 17:58:28 WARNING OGG-02064 Oracle compatibility version 11.2.0.0.0 has limited datatype support for integrated capture. Version 11.2.0.3 required for full support.
2013-01-24 17:58:46 INFO OGG-02003 Extract INTEXT successfully registered with database at SCN 1164411.
GGSCI (pdemvrhl061) 1> ADD EXTRACT intext INTEGRATED TRANLOG, BEGIN NOW
EXTRACT added.
GGSCI (pdemvrhl061) 3> ADD EXTTRAIL /u01/app/ggate/dirdat/lt, EXTRACT intext
EXTTRAIL added.
GGSCI (pdemvrhl061) 4> ADD EXTRACT intdp EXTTRAILSOURCE /u01/app/ggate/dirdat/lt
EXTRACT added.
GGSCI (pdemvrhl061) 5> ADD RMTTRAIL /u01/app/ggate/dirdat/rt, EXTRACT intdp
RMTTRAIL added.
GGSCI (pdemvrhl061) 6> EDIT PARAMS intext
EXTRACT intext
USERID ggate, PASSWORD ggate
TRANLOGOPTIONS INTEGRATEDPARAMS (MAX_SGA_SIZE 100)
EXTTRAIL /u01/app/ggate/dirdat/lt
TABLE sh.customers;
GGSCI (pdemvrhl061) 7> EDIT PARAMS intdp
EXTRACT intdp
USERID ggate, PASSWORD ggate
RMTHOST 10.xx.206.xx, MGRPORT 7809
RMTTRAIL /u01/app/ggate/dirdat/rt
TABLE sh.customers ;
GGSCI (pdemvrhl061) 7> start extract intext
Sending START request to MANAGER ...
EXTRACT INTEXT starting
GGSCI (pdemvrhl061) 8> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     INTDP       00:00:00      00:00:05
EXTRACT     RUNNING     INTEXT      01:17:18      00:00:04

On the target site, start the Replicat process.

GGSCI (pdemvrhl062) 4> START REPLICAT rep1
Sending START request to MANAGER ...
REPLICAT REP1 starting

GGSCI (pdemvrhl062) 5> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
REPLICAT    RUNNING     REP1        00:00:00      00:00:06

In the background ….

When we register the extract, a capture process named OGG$CAP_INTEXT and a queue named OGG$Q_INTEXT are created in the GGATE schema.
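We can confirm this from the data dictionary. A quick check (run from SQL*Plus as a suitably privileged user) against the standard DBA_CAPTURE and DBA_QUEUES views might look like the sketch below:

```sql
SQL> SELECT capture_name, queue_name, status FROM dba_capture;

SQL> SELECT owner, name FROM dba_queues WHERE owner = 'GGATE';
```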

The database alert log is also a good source of information; we can see messages like the ones shown below:

LOGMINER: session#=1 (OGG$CAP_INTEXT), reader MS00 pid=41 OS id=32201 sid=153 started
Thu Jan 24 18:04:15 2013
LOGMINER: session#=1 (OGG$CAP_INTEXT), builder MS01 pid=42 OS id=32203 sid=30 started
Thu Jan 24 18:04:15 2013
LOGMINER: session#=1 (OGG$CAP_INTEXT), preparer MS02 pid=43 OS id=32205 sid=155 started
Thu Jan 24 18:04:16 2013

LOGMINER: Begin mining logfile for session 1 thread 1 sequence 12, /u01/oradata/testdb1/redo03.log
LOGMINER: End mining logfile for session 1 thread 1 sequence 12, /u01/oradata/testdb1/redo03.log

Read further

GoldenGate Integrated Capture Healthcheck Script [Article ID 1448324.1]

Advisor Webcast : Extracting Data in Oracle GoldenGate Integrated Capture Mode (MOS note 740966.1)

GoldenGate Integrated Capture using downstream mining database


In my earlier post, we had discussed the GoldenGate 11g Integrated Capture feature using the local deployment model.

 Let us now look at the Downstream Capture deployment model of the Integrated Capture mode.

It should be noted that the main difference between Integrated Capture mode and Classic Capture mode is that the Extract process no longer reads the online (or archived) redo log files of the Oracle database itself. Instead, the database log mining server reads the changes and presents them in the form of Logical Change Records (LCRs), which the Extract process then writes to the GoldenGate trail files.

Where the log mining server resides is what distinguishes the local and downstream deployment models of Integrated Capture.

 In the local deployment, the source database and the mining database are the same.

In downstream deployment, the source database and mining database are different databases and the log mining server resides in the downstream database. We configure redo transport (similar to what we do in Data Guard) and logs are shipped over the network from the source database to the downstream database. The log mining server in the downstream database extracts changes from the redo log (or archived log) files in the form of logical change records, which are then passed on to the GoldenGate Extract process.

Since log mining imposes additional overhead on the database where it runs (it adds extra processes and consumes memory from the SGA), it is beneficial to offload this processing from the source database to the downstream database.

We can configure the downstream database to be the same database as the target database, or we can use a separate downstream database in addition to the target database.

However, do keep in mind that in the downstream deployment model of Integrated Capture, the Oracle database version and platform of the source and downstream mining databases need to be the same.

Setup and Configuration

 Source Database

  •  Create the source database user account whose credentials Extract will use to fetch data and metadata from the source database. This user can be the same user we created when we setup and configured GoldenGate.
  •  Grant the appropriate privileges for Extract to operate in integrated capture mode via the dbms_goldengate_auth.grant_admin_privilege procedure (11.2.0.3 and above)
  •  Grant  select on v$database to that same user
  •  Configure Oracle Net so that the source database can communicate with the downstream database (like Data Guard)
  •  Create the password file and copy the password file to the $ORACLE_HOME/dbs location on the server hosting the downstream database. Note that the password file must be the same at all source databases, and at the mining database.
  •  Configure one LOG_ARCHIVE_DEST_n initialization parameter to transmit redo data to the downstream mining database.
  •  At the source database (as well as the downstream mining database), set the DG_CONFIG attribute of the LOG_ARCHIVE_CONFIG initialization parameter to include the DB_UNIQUE_NAME of the source database and the downstream database

 

Downstream Database

  •  Create the database user account on the downstream database. The Extract process will use these credentials to interact with the downstream logmining server. We can use the same user which we had created when we setup and configured GoldenGate on the target database (if the target database and downstream database are the same).
  •  Grant the appropriate privileges for the downstream mining user to operate in integrated capture mode by executing the dbms_goldengate_auth.grant_admin_privilege procedure.
  •  Grant SELECT on v$database to the same downstream mining user
  •  Downstream database must be running in ARCHIVELOG mode and we should configure archival of local redo log files if we want to run Extract in real-time integrated capture mode. Use the LOG_ARCHIVE_DEST_n parameter as shown in the example.
  •  Create Standby redo log files (same size as online redo log files and number of groups should be one greater than existing online redo log groups)
  •  Configure the database to archive standby redo log files locally that receive redo data from the online redo logs of the source database. Use the LOG_ARCHIVE_DEST_n parameter as shown in the example.

Some new GoldenGate Parameters related to Downstream Integrated Capture

MININGDBLOGIN – Before registering the extract we have to connect to the downstream logmining database with the appropriate database login credentials

TRANLOGOPTIONS MININGUSER ggate@testdb2 MININGPASSWORD ggate – specify this in the downstream extract parameter file

TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y) – specify this in the downstream extract parameter file; it is required for real-time capture

Example

This example illustrates real-time Integrated Capture so we have to configure standby log files as well.

 The source database is testdb1 and the downstream/target database is testdb2

 The database user account is GGATE in both the source as well  as downstream/target database

 We have setup and tested Oracle Net connectivity from source to downstream/target database. In this case we have setup TNS aliases testdb1 and testdb2 in the tnsnames.ora file on both servers
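For reference, the tnsnames.ora entries on both servers might look something like the sketch below; the hostnames and port are placeholders, so adjust them to your environment:

```
testdb1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = source-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = testdb1))
  )

testdb2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = downstream-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = testdb2))
  )
```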

 Source Database (testdb1)

Grant privileges

SQL>  EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggate', privilege_type => 'capture',  grant_select_privileges=> true, -
      do_grants => TRUE);

PL/SQL procedure successfully completed.

SQL> GRANT SELECT ON V_$DATABASE TO GGATE;

Grant succeeded.

Configure Redo Transport

 SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='SERVICE=testdb2 ASYNC NOREGISTER VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=testdb2';

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

System altered.

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb1,testdb2)';

System altered.

Downstream Database

 Grant Privileges

SQL>  EXEC DBMS_GOLDENGATE_AUTH.GRANT_ADMIN_PRIVILEGE(grantee => 'ggate', privilege_type => 'capture',  grant_select_privileges=> true, -
      do_grants => TRUE);

PL/SQL procedure successfully completed.

SQL> GRANT SELECT ON V_$DATABASE TO GGATE;

Grant succeeded.

Prepare the mining database to archive its local redo

 SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_1='LOCATION=/u01/oradata/testdb2/arch_local VALID_FOR=(ONLINE_LOGFILE,PRIMARY_ROLE)';

System altered.

Create Standby log files

 SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 '/u01/oradata/testdb2/standby_redo04.log' SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 '/u01/oradata/testdb2/standby_redo5.log' SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 '/u01/oradata/testdb2/standby_redo06.log'  SIZE 50M;

Database altered.

SQL> ALTER DATABASE ADD STANDBY LOGFILE GROUP 7 '/u01/oradata/testdb2/standby_redo07.log' SIZE 50M;

Database altered.

Prepare the mining database to archive redo received in standby redo logs from the source database

 SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_2='LOCATION=/u01/oradata/testdb2/arch_remote VALID_FOR=(STANDBY_LOGFILE,PRIMARY_ROLE)';

 System altered.

 SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2=ENABLE;

 System altered.

 Set DG_CONFIG at the downstream mining database

SQL> ALTER SYSTEM SET LOG_ARCHIVE_CONFIG='DG_CONFIG=(testdb1,testdb2)';

 System altered.

Setup Integrated Capture Extract Process (myext)

 [oracle@pdemvrhl062 ggate]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 11.2.1.0.3 14400833 OGGCORE_11.2.1.0.3_PLATFORMS_120823.1258_FBO
Linux, x64, 64bit (optimized), Oracle 11g on Aug 23 2012 20:20:21

Copyright (C) 1995, 2012, Oracle and/or its affiliates. All rights reserved.

GGSCI (pdemvrhl062) 1> DBLOGIN USERID ggate@testdb1 PASSWORD ggate
Successfully logged into database.

GGSCI (pdemvrhl062) 2> MININGDBLOGIN USERID ggate, PASSWORD ggate
Successfully logged into mining database.

GGSCI (pdemvrhl062) 5> REGISTER EXTRACT myext DATABASE

2013-01-31 18:02:02  WARNING OGG-02064  Oracle compatibility version 11.2.0.0.0 has limited datatype support for integrated capture. Version 11.2.0.3 required for full support.

2013-01-31 18:03:12  INFO    OGG-02003  Extract MYEXT successfully registered with database at SCN 2129145.

GGSCI (pdemvrhl062) 6> ADD EXTRACT myext INTEGRATED TRANLOG BEGIN NOW
EXTRACT added.

GGSCI (pdemvrhl062) 7> ADD EXTTRAIL /u01/app/ggate/dirdat/ic , EXTRACT myext
EXTTRAIL added.

GGSCI (pdemvrhl062) 8> EDIT PARAMS myext

EXTRACT myext
USERID ggate@testdb1, PASSWORD ggate
TRANLOGOPTIONS MININGUSER ggate@testdb2 MININGPASSWORD ggate
TRANLOGOPTIONS INTEGRATEDPARAMS (downstream_real_time_mine Y)
EXTTRAIL /u01/app/ggate/dirdat/ic
TABLE sh.customers;

Create the Replicat process (myrep)

GGSCI (pdemvrhl062) 14> ADD REPLICAT myrep EXTTRAIL /u01/app/ggate/dirdat/ic
REPLICAT added.

GGSCI (pdemvrhl062) 17> EDIT PARAMS myrep

REPLICAT myrep
ASSUMETARGETDEFS
USERID ggate, PASSWORD ggate
MAP sh.customers, TARGET sh.customers;

Start the Extract and Replicat processes

GGSCI (pdemvrhl062) 19> info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING
EXTRACT     RUNNING     MYEXT       00:00:00      00:00:03
REPLICAT    RUNNING     MYREP       00:00:00      00:00:03

Test – On source database update rows of the CUSTOMERS table

 

SQL> update customers set cust_city='SYDNEY';

55500 rows updated.

SQL> commit;

Commit complete.

On target database confirm the update statement has been replicated

[oracle@pdemvrhl062 ggate]$ sqlplus sh/sh

SQL*Plus: Release 11.2.0.3.0 Production on Thu Jan 31 18:39:41 2013

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select distinct cust_city from customers;

CUST_CITY
------------------------------
SYDNEY

Check the statistics of the downstream Extract myext

GGSCI (pdemvrhl062) 23> stats extract myext

Sending STATS request to EXTRACT MYEXT ...

Start of Statistics at 2013-01-31 18:37:54.

Output to /u01/app/ggate/dirdat/ic:

Extracting from SH.CUSTOMERS to SH.CUSTOMERS:

*** Total statistics since 2013-01-31 18:37:07 ***
        Total inserts                                      0.00
        Total updates                                  55500.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                               55500.00