Channel: Oracle DBA – Tips and Techniques

Convert Single Instance Database to RAC Using rconfig


Oracle RAC How-To Series – Tutorial 9

Download the note (for members only…)

Tutorial 9


Convert RAC to RAC One Node and RAC One Node to RAC


Oracle RAC How-To Series – Tutorial 10

Download the note (for members only…)

Tutorial 10


Adding and Deleting a Node From a RAC Cluster


Oracle RAC How-To Series – Tutorial 11

Download the note (for members only…)

Tutorial 11


DNS and DHCP setup for 12c R2 Grid Infrastructure installation with Grid Naming Service (GNS)


Oracle RAC How-To Series – Tutorial 12

Download the note (for members only…)

Tutorial 12


18c Grid Infrastructure Upgrade


Oracle RAC How-To Series – Tutorial 13

Download the note (for members only…)

Tutorial 13


Oracle 12c Clusterware Post-installation and Configuration Verification


Oracle RAC How-To Series – Tutorial 14

Download the note (for members only…)

Tutorial 14


July 2018 PSU Oracle Grid Infrastructure 12c Release 2


Oracle RAC How-To Series – Tutorial 15

Download the note (for members only…)

Tutorial 15


How to create the Linux 6.8 VMs on VirtualBox for the Oracle RAC 12c Workshop


Oracle RAC How-To Series – Tutorial 16

Download the note (for members only…)

Tutorial 16


Exadata Online Training


The fourth edition of the highly popular “Oracle Exadata Essentials for Oracle DBA’s” online training course will commence on Sunday 4th November.

This hands-on training course will teach you how to install and configure an Exadata Storage Server Cell on your own individual Oracle VirtualBox platform as well as prepare you for the Oracle Certified Expert, Oracle Exadata X5 Administrator exam (1Z0-070).

Classes run from 9:30 AM to 1:30 PM US EST, and recordings of all sessions are available in case a session is missed, as well as for future reference.

The cost of the 4-week online hands-on training is $699.00. The course curriculum is based on the Exadata Database Machine: 12c Administration Workshop course offered by Oracle University, which costs over USD $5,000.

Book your seat for this training course via the registration link below:

Register for Exadata Essentials …

In addition to the topics listed below, attendees will learn how to use CELLCLI to create and manage cell disks, grid disks and flash disks as well as how to configure alerts and monitoring of storage cells on their own individual Exadata Storage Server environments.

• Install Exadata Storage Server software and create storage cells on a VirtualBox platform
• Exadata Database Machine Components & Architecture
• Exadata Database Machine Networking
• Smart Scans and Cell Offloading
• Storage Indexes
• Smart Flash Cache and Flash Logging
• Exadata Hybrid Columnar Compression
• I/O Resource Management (IORM)
• Exadata Storage Server Configuration
• Database File System
• Migration to Exadata platform
• Storage Server metrics and alerts
• Monitoring Exadata Database Machine using OEM
• Applying a patch to an Exadata Database Machine
• Automatic Support Ecosystem
• Exadata Cloud Service overview

…. and more!


Oracle 18c RPM Based Software Installation


One of the (many) new features in Oracle Database 18c enables us to install single-instance Oracle Database software (no support for Grid Infrastructure as yet) using an RPM package.

So, as part of provisioning a new Linux server, for example, the system administrator can also deliver the Oracle 18c software pre-installed and ready to be used by the DBA.

Note that the RPM-based Oracle Database installation is not available for Standard Edition 2; support for it is planned for the next release, 19c.

The naming convention for RPM packages is name-version-release.architecture.rpm.

Currently the RPM for 18c is: oracle-database-ee-18c-1.0-1.x86_64.rpm

So we can see that this RPM is for 18c Enterprise Edition (ee-18c), with version number 1.0, package release number 1 and platform architecture x86_64.
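As a quick check before installing, the name, version, release and architecture fields can be read straight from the package metadata. The output below is abbreviated and the prompt/hostname is illustrative:

[root@ora18c ~]# rpm -qpi oracle-database-ee-18c-1.0-1.x86_64.rpm
Name        : oracle-database-ee-18c
Version     : 1.0
Release     : 1
Architecture: x86_64
...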

To install the 18c database software we will do the following:

  • Connect as root and download and install the 18c pre-installation RPM using the yum install command
  • Download the 18c Oracle Database RPM-based installation software from OTN or the Oracle Software Delivery Cloud portal (Edelivery).
  • Install the database software using the yum localinstall command

Once the 18c software has been installed, we can run a script as root (/etc/init.d/oracledb_ORCLCDB-18c configure) which will automatically create a Container Database (ORCLCDB) with a Pluggable Database (ORCLPDB1), as well as configure and start the listener.
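As a rough sketch of those steps (the hostname and the /stage staging directory are illustrative, and the pre-installation RPM assumes the server is registered with the Oracle Linux yum repository):

# Step 1: install the 18c pre-installation RPM as root (sets up users, groups and kernel parameters)
[root@ora18c ~]# yum -y install oracle-database-preinstall-18c

# Step 2: the database RPM downloaded from OTN / Oracle Software Delivery Cloud has been staged locally
[root@ora18c ~]# ls /stage
oracle-database-ee-18c-1.0-1.x86_64.rpm

# Step 3: install the database software from the local RPM
[root@ora18c ~]# yum -y localinstall /stage/oracle-database-ee-18c-1.0-1.x86_64.rpm

# Optionally create and configure the ORCLCDB database and listener
[root@ora18c ~]# /etc/init.d/oracledb_ORCLCDB-18c configure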

 

How to install the Oracle 18c RPM-based database software


Oracle 18c Pluggable Database Switchover


How to configure an Oracle 18c read-only Oracle Home


Oracle 18c New Feature Read-Only Oracle Homes


One of the new features of Oracle Database 18c is that we can now configure an Oracle Home in read-only mode.

In a read-only Oracle home, all the configuration files (such as the database init.ora, password files, listener.ora and tnsnames.ora), as well as the related log files, reside outside of the Oracle home.

This feature allows us to use the read-only Oracle home as a ‘master’ or ‘gold’ software image that can be distributed across multiple servers. It enables mass provisioning and also simplifies patching where hundreds of target servers potentially require a patch: we patch the ‘master’ read-only Oracle Home and then deploy that image seamlessly on the target servers.

To configure a read-only Oracle Home, we need to do a software-only 18c installation, that is, we do not create a database as part of the software installation.

We then run the roohctl -enable command, which configures the Oracle Home in read-only mode.

In addition to the ORACLE_HOME and ORACLE_BASE variables, we now have a new variable called ORACLE_BASE_CONFIG, and alongside the oratab file we have an additional file called orabasetab.

So in an 18c read-only Oracle Home, the dbs directory, for example, is no longer located in its traditional place under $ORACLE_HOME/dbs; it now lives under $ORACLE_BASE_CONFIG, which takes the form of a directory structure $ORACLE_BASE/homes/<ORACLE_HOME_NAME>.
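A minimal sketch of the flow, assuming a software-only installation under /u01/app/oracle/product/18.0.0/dbhome_1 (the path and prompt are illustrative):

# Enable the read-only Oracle Home after the software-only installation
[oracle@ora18c ~]$ export ORACLE_HOME=/u01/app/oracle/product/18.0.0/dbhome_1
[oracle@ora18c ~]$ $ORACLE_HOME/bin/roohctl -enable

# The orabasetab file records whether the home is read-only
[oracle@ora18c ~]$ cat $ORACLE_HOME/install/orabasetab

# orabaseconfig and orabasehome report where configuration files such as dbs now live
[oracle@ora18c ~]$ $ORACLE_HOME/bin/orabaseconfig
[oracle@ora18c ~]$ $ORACLE_HOME/bin/orabasehome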

Read more about how to configure an Oracle 18c read-only Oracle Home (Members Only)

Oracle 18c New Feature Pluggable Database Switchover


In releases prior to Oracle 18c, while we could enable Data Guard for a Multitenant Container/Pluggable database environment, we were restricted when it came to performing a Switchover or Failover: it had to be performed at the Container Database (CDB) level.

This meant that a database role reversal would affect each and every PDB hosted by the CDB undergoing a Data Guard Switchover or Failover.

In Oracle 12c Release 2, a new feature called refreshable clone PDB was introduced. A refreshable clone PDB is a read-only clone that can periodically synchronize itself with its source PDB.

This synchronization could be configured to happen manually or automatically based on a predefined interval for the refresh.
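For context, here is a hedged sketch of how such a refreshable clone might be created in the target CDB. The database link name cdb1_link is an assumption, and the keystore and file-placement clauses a real system may require are omitted:

[oracle@host ~]$ export ORACLE_SID=cdb2
[oracle@host ~]$ sqlplus / as sysdba <<EOF
-- create a refreshable clone of orclpdb1 in CDB2, refreshed from its source in CDB1 over a DB link
CREATE PLUGGABLE DATABASE orclpdb1 FROM orclpdb1@cdb1_link
  REFRESH MODE EVERY 10 MINUTES;
-- a refreshable clone can only ever be opened read-only
ALTER PLUGGABLE DATABASE orclpdb1 OPEN READ ONLY;
EOF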

In Oracle 18c a new feature built on the refreshable clone mechanism enables us to perform a switchover at the individual PDB level, giving us high availability at the PDB level within the CDB.

We can now issue a command in Oracle 18c like this:

SQL> alter pluggable database orclpdb1
refresh mode manual
from orclpdb1@cdb2_link
switchover;

After the switchover completes, the original source PDB becomes the refreshable clone PDB (which can only be opened in READ ONLY mode), while the original refreshable clone PDB is now open in read/write mode functioning as a source PDB.

How to perform a Switchover for a Pluggable Database (Members Only)


How to perform an Oracle 18c Container and Pluggable Database upgrade using dbupgrade


Minimizing downtime for Container and Pluggable Database (CDB and PDB) Upgrades


Oracle 12c Release 2 introduced a number of new features which would help reduce outages required for database upgrades.

The Multitenant feature was introduced in Oracle 12.1, and the features introduced in Oracle 12.2 not only significantly reduce the overall duration of a database upgrade, but also give us more control over Container and Pluggable Database (CDB and PDB) upgrades.

Let us take a look at some of those features.

The Parallel Upgrade Utility catctl.pl reduces the amount of time it takes to perform an upgrade by loading the database dictionary in parallel using multiple SQL processes to upgrade the database. It takes into account the available CPU resources on the server hosting the database being upgraded.

In 12.2 we can now run the database upgrade directly using the dbupgrade shell script located in the $ORACLE_HOME/bin directory.

The dbupgrade script is a wrapper around catctl.pl, and we can use a number of flags with the dbupgrade command to influence the degree of parallelism used for the upgrade, to control which PDBs are included in or excluded from the upgrade, and to assign a prioritised order in which the PDBs will be upgraded.

We can resume a failed upgrade (with the -R flag) once the problem which caused the failure has been fixed. This feature is available both for the GUI Database Upgrade Assistant (DBUA) utility and for upgrades performed via the command line.

The Parallel Upgrade Utility parameters -n and -N determine how many PDBs are upgraded in parallel, as well as enabling us to control the number of parallel SQL processes to use when upgrading databases.

We can also run the Parallel Upgrade Utility in Emulation mode (with the -E flag) and this will show us the various parameters which will be used to actually upgrade the database. We can verify the output and determine if we need to change any parameters before we actually perform the database upgrade. For example, we can run an upgrade emulation to obtain more information about how the resource allocation choices made using the -n and -N parameters are carried out.

The ability to prioritize the upgrade of the individual PDBs in a CDB is also now available. So we can upgrade a certain PDB ahead of the others and, once its upgrade is over, open it in read-write mode for application access while the remaining PDBs are still being upgraded. This is achieved by creating a priority list text file and using the -L flag to point to its location.

We can also use the -M flag, which places the CDB and all its PDBs in upgrade mode, resulting in a reduced overall upgrade time. However, no individual PDB can be brought up until the CDB and all its associated PDBs have been upgraded.

Application data tablespaces can also be placed in read-only mode for the duration of the upgrade using the -T flag. The tablespaces are automatically put back into read-write mode once the upgrade is completed. This may mean only a partial restore is needed if the upgrade fails and we wish to ‘rollback’. Think of it as something similar to creating a Guaranteed Restore Point as part of your database upgrade strategy. This feature would be particularly useful when upgrading large Data Warehouse databases typically running in NOARCHIVELOG mode, or where we cannot use the FLASHBACK DATABASE feature, such as with the Standard Edition of the Oracle database software.
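To illustrate how these flags combine, here is a hedged sketch. The PDB names, the priority list file and the flag values are hypothetical, so check the dbupgrade/catctl.pl documentation for your release before relying on them:

# Hypothetical priority list: lower numbers are upgraded first
[oracle@host ~]$ cat /home/oracle/pdb_priority.lst
1,CDB$ROOT
2,PDB$SEED
3,SALESPDB
4,HRPDB

# Dry run in emulation mode (-E) to review how the -n/-N resource choices would be carried out
[oracle@host ~]$ cd $ORACLE_HOME/bin
[oracle@host bin]$ ./dbupgrade -n 4 -N 2 -E -L /home/oracle/pdb_priority.lst

# The actual upgrade, honouring the priority list
[oracle@host bin]$ ./dbupgrade -n 4 -N 2 -L /home/oracle/pdb_priority.lst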

 

How to perform an Oracle 18c Container and Pluggable Database upgrade using dbupgrade (Members Only)

Oracle Data Integration Platform Cloud (DIPC)


Oracle Data Integration Platform Cloud (DIPC) is an Oracle Cloud Service which offers a single platform to connect to and integrate with any number of heterogeneous data sources, both on-premises and in the Cloud.

Proven existing data integration technologies like Oracle GoldenGate, Oracle Data Integrator and Oracle Enterprise Data Quality are transformed into a single unified cloud platform that is able to access and manipulate hundreds of data sources.

It provides the tools to seamlessly move data, batch as well as real-time, between cloud and on-premises data sources, as well as between third-party Clouds and the Oracle Cloud.

So what can we do with Data Integration Platform Cloud?

  • Extract, load, and transform data entities of any shape and format (ODI)
  • Synchronize or replicate selected data sources (GoldenGate)
  • Perform data analytics and maintain data quality (EDQ)
  • Profile, cleanse, analyze, and govern your data entities
  • Integrate with big data technologies to ingest, transform and stream data
  • Harness the capabilities of Oracle Stream Analytics to correlate events, add machine learning to applications, detect patterns, get intelligent data from logs and make real-time decisions
  • Perform zero-downtime data migration

Coming soon ….!

Learn how to create and configure a Data Integration Platform Cloud instance, download and deploy remote agents, and perform a zero-downtime database upgrade of on-premises databases using the remote agents.

 

Oracle GoldenGate 18c Upgrade


This note outlines the procedure followed to upgrade GoldenGate 12.3 to the latest 18c version (18.1.0.0.0).

Note:

  • If we are upgrading from Oracle GoldenGate 11.2.1.0.0 or earlier, we also need to upgrade the Replicat checkpoint table via the GGSCI command UPGRADE CHECKPOINTTABLE [owner.table]
  • If we are using trigger-based DDL replication support, then additional steps need to be carried out which are described in more detail in the GoldenGate Upgrade documentation outlined in the URL below:

https://docs.oracle.com/en/middleware/goldengate/core/18.1/upgrade/upgrading-release-oracle-database.html#GUID-9B490BE5-F0AE-44D1-B63C-F5299B9DFD16

In this example, the source database version is higher than 11.2.0.4 and we are using Integrated Extract where DDL capture support is integrated into the database logmining server.
 
 

  • Verify that there are no open and uncommitted transactions

 

GGSCI (rac01.localdomain) 2> send ext1 showtrans
Sending SHOWTRANS request to EXTRACT EXT1 ...
No transactions found.

GGSCI (rac01.localdomain) 3> send ext1 logend
Sending LOGEND request to EXTRACT EXT1 ...
YES

 

  • Stop the Extract (and Pump)

 

GGSCI (rac01.localdomain) 5> stop extract * 

Sending STOP request to EXTRACT EXT1 ...
Request processed.

Sending STOP request to EXTRACT PUMP1 ...
Request processed.

 

  • Ensure Replicat has finished processing all current DML and DDL data in the Oracle GoldenGate trails before stopping the replicat

Issue the command SEND REPLICAT with the STATUS option until it returns a status of “At EOF” to indicate that it finished processing all of the data in the trail file.
 

GGSCI (rac01.localdomain) 4> send rep1 status 
Sending STATUS request to REPLICAT REP1 ...
  Current status: At EOF
  Sequence #: 2
  RBA: 1,538
  0 records in current transaction.

GGSCI (rac01.localdomain) 6> stop replicat * 

Sending STOP request to REPLICAT REP1 ...
Request processed.

 

  • Stop the Manager process

 

GGSCI (rac01.localdomain) 7> stop mgr !

Sending STOP request to MANAGER ...
Request processed.
Manager stopped.

 

  • Take a backup of the current Oracle GoldenGate installation directory on the source and target systems, as well as any working directories that have been installed for a cluster configuration on a shared file system, such as dirprm, dircrd, dirchk, BR, dirwlt, dirrpt, etc.

We do not need to back up the dirdat folder, which contains the trail files.
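A minimal backup sketch, assuming the GoldenGate home is /acfs_oh/app/goldengate (as shown later in this note) and /backup is an available destination; the same would be repeated on the target system:

[oracle@rac01 ~]$ mkdir -p /backup
# archive the GoldenGate home but skip the dirdat trail files
[oracle@rac01 ~]$ tar --exclude='goldengate/dirdat' -czf /backup/ogg123_home_$(date +%Y%m%d).tar.gz \
      -C /acfs_oh/app goldengate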

It is recommended to upgrade both the source as well as target Oracle GoldenGate environments at the same time.

If we are not upgrading Replicat on the target systems at the same time as the source, add the following parameter to the Extract parameter file(s) to specify the version of Oracle GoldenGate that is running on the target.

This parameter causes Extract to write a version of the trail that is compatible with the older version of Replicat.

{EXTTRAIL | RMTTRAIL} file_name FORMAT RELEASE major.minor

For example:

EXTTRAIL ./dirdat/lt FORMAT RELEASE 12.3

  • On both source and target GoldenGate environments install Oracle GoldenGate 18c (18.1.0) using Oracle Universal Installer (OUI) into an existing Oracle GoldenGate directory.

Note: Ensure the checkbox to start the Manager is not ticked.
 

[oracle@rac01 sf_software]$ cd 181000_fbo_ggs_Linux_x64_shiphome
[oracle@rac01 181000_fbo_ggs_Linux_x64_shiphome]$ cd fbo_ggs_Linux_x64_shiphome/
[oracle@rac01 fbo_ggs_Linux_x64_shiphome]$ cd Disk1
[oracle@rac01 Disk1]$ ./runInstaller 

 

 

 


 

  • Execute the ulg.sql script located in the GoldenGate software root directory as SYSDBA. This script converts the existing supplemental log groups to the format required by the new release.

 

[oracle@rac01 goldengate]$ sqlplus sys as sysdba

SQL*Plus: Release 12.2.0.1.0 Production on Tue Jan 8 11:26:34 2019

Copyright (c) 1982, 2016, Oracle.  All rights reserved.

Enter password: 

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> @ulg.sql
Oracle GoldenGate supplemental log groups upgrade script.
Please do not execute any DDL while this script is running. Press ENTER to continue.


PL/SQL procedure successfully completed.

 

  • After the installation/upgrade is completed, alter the primary Extract process as well as the associated data pump Extract processes to write to a new trail sequence number via the ETROLLOVER command.

Reposition both the existing Extract Pump as well as the Replicat processes to start reading from and processing the new trail file.

[oracle@rac01 goldengate]$ ./ggsci

Oracle GoldenGate Command Interpreter for Oracle
Version 18.1.0.0.0 OGGCORE_18.1.0.0.0_PLATFORMS_180928.0432_FBO
Linux, x64, 64bit (optimized), Oracle 12c on Sep 29 2018 04:22:21
Operating system character set identified as UTF-8.

Copyright (C) 1995, 2018, Oracle and/or its affiliates. All rights reserved.



GGSCI (rac01.localdomain) 1> info all 

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     STOPPED                                           
EXTRACT     STOPPED     EXT1        00:00:00      01:13:55    
EXTRACT     STOPPED     PUMP1       00:00:00      01:13:55

GGSCI (rac01.localdomain) 2> alter extract ext1 etrollover 

2019-01-08 00:44:13  INFO    OGG-01520  Rollover performed.  For each affected output trail of Version 10 or higher format, after starting the source extract, issue ALTER EXTSEQNO for that trail's reader (either pump EXTRACT or REPLICAT) to move the reader's scan to the new trail file;  it will not happen automatically.
EXTRACT altered.


GGSCI (rac01.localdomain) 3> alter extract pump1 etrollover 

2019-01-08 00:44:51  INFO    OGG-01520  Rollover performed.  For each affected output trail of Version 10 or higher format, after starting the source extract, issue ALTER EXTSEQNO for that trail's reader (either pump EXTRACT or REPLICAT) to move the reader's scan to the new trail file;  it will not happen automatically.
EXTRACT altered.


GGSCI (rac01.localdomain) 4> info ext1 detail 

EXTRACT    EXT1      Initialized   2019-01-07 14:36   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:53 ago)
Log Read Checkpoint  Oracle Integrated Redo Logs
                     2019-01-07 23:29:38
                     SCN 0.3272690 (3272690)

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/ogg1/lt                                     3          0        500 EXTTRAIL  


GGSCI (rac01.localdomain) 5> alter pump1 extseqno 3 extrba 0
EXTRACT altered.


GGSCI (rac01.localdomain) 6> info pump1 detail 

EXTRACT    PUMP1     Initialized   2019-01-08 00:45   Status STOPPED
Checkpoint Lag       00:00:00 (updated 00:00:08 ago)
Log Read Checkpoint  File /acfs_oh/app/goldengate/dirdat/ogg1/lt000000003
                     First Record  RBA 0

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/ogg2/rt                                     3          0        500 RMTTRAIL  

  
GGSCI (rac01.localdomain) 7> alter rep1 extseqno 3 extrba 0

2019-01-08 00:46:08  INFO    OGG-06594  Replicat REP1 has been altered. Even the start up position might be updated, duplicate suppression remains active in next startup. To override duplicate suppression, start REP1 with NOFILTERDUPTRANSACTIONS option.

REPLICAT (Integrated) altered.

  • Start all the GoldenGate processes in the new GoldenGate 18c environment
GGSCI (rac01.localdomain) 8> start mgr
Manager started.


GGSCI (rac01.localdomain) 9> info all 

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     STARTING    EXT1        00:00:00      00:02:06    
EXTRACT     STARTING    PUMP1       00:00:00      00:00:50    
REPLICAT    STARTING    REP1        00:00:00      00:00:11    


GGSCI (rac01.localdomain) 10>

GGSCI (rac01.localdomain) 10> !
info all

Program     Status      Group       Lag at Chkpt  Time Since Chkpt

MANAGER     RUNNING                                           
EXTRACT     RUNNING     EXT1        00:00:00      00:00:06    
EXTRACT     RUNNING     PUMP1       00:00:00      00:00:07    
REPLICAT    RUNNING     REP1        00:00:00      00:00:03    

Oracle GoldenGate Automatic Conflict Detection and Resolution (CDR)


Automatic Conflict Detection and Resolution is a new feature that is specific to Oracle GoldenGate 12c (12.3.0.1) and Oracle Database 12c Release 2 (12.2.0.1) and above.

We can now configure and manage Oracle GoldenGate automatic conflict detection and resolution in the Oracle Database via the DBMS_GOLDENGATE_ADM package as well as monitor CDR using a number of data dictionary views.

This is done using the ADD_AUTO_CDR procedure which is part of the Oracle database DBMS_GOLDENGATE_ADM package.

Prior to GoldenGate 12.3 we had to use the Replicat COMPARECOLS and RESOLVECONFLICT parameters for CDR, for example:

MAP SH.test_cdr, TARGET SH.test_cdr,&
COMPARECOLS (ON UPDATE ALL, ON DELETE ALL ),&
RESOLVECONFLICT (INSERTROWEXISTS,(DEFAULT,OVERWRITE));

There are two methods used for Automatic CDR:

a) Latest Timestamp Conflict Detection and Resolution
b) Delta Conflict Detection and Resolution

This note provides an example of Automatic CDR using the Latest Timestamp Conflict Detection and Resolution method.

The environment for this example consists of two CDBs (CDB1, CDB2) located on the same VirtualBox VM, each containing a single Pluggable Database (PDB1 and PDB2 respectively). We have configured an Active-Active GoldenGate environment which replicates data between PDB1 and PDB2 and vice versa.

So on the GoldenGate environment side we have this setup:

EXT1>>PUMP1>>REP1>>PDB2
EXT2>>PUMP2>>REP2>>PDB1

We will simulate a conflict by inserting a single row with the same primary key value into the same table in PDB1 and PDB2 at the same time. We do this via a cron job which calls a shell script that performs the DML activity.

The table name is TEST_CDR and the schema is HR.

Steps:

Execute the ADD_AUTO_CDR procedure and specify the table to configure for Automatic CDR

Note: we do this in both databases, connecting as the GoldenGate administration user we have created, C##OGGADMIN.

This will add an invisible column called CDRTS$ROW to the TEST_CDR table, and will also create a ‘tombstone’ table containing the LCRs of deleted/inserted rows, which is used to handle conflicts related to DELETEs and INSERTs.

In the Replicat parameter file we need to add the MAPINVISIBLECOLUMNS parameter.
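For illustration, a hypothetical Replicat parameter file might look like this; the group name, credential-store alias and MAP statement are placeholders, the point is simply where MAPINVISIBLECOLUMNS goes:

[oracle@rac01 goldengate]$ cat dirprm/rep1.prm
REPLICAT rep1
-- placeholder credential-store alias for the target PDB connection
USERIDALIAS oggadmin_pdb2
-- needed so Replicat can map the invisible CDRTS$ROW column added by ADD_AUTO_CDR
MAPINVISIBLECOLUMNS
MAP pdb1.hr.test_cdr, TARGET hr.test_cdr;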
 

SQL> conn c##oggadmin/oracle@pdb1
Connected.

SQL> BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR(
    schema_name => 'HR',
    table_name  => 'TEST_CDR',
  record_conflicts => TRUE);
END;
/

PL/SQL procedure successfully completed.


SQL> COLUMN TABLE_OWNER FORMAT A15
COLUMN TABLE_NAME FORMAT A15
COLUMN TOMBSTONE_TABLE FORMAT A15
COLUMN ROW_RESOLUTION_COLUMN FORMAT A25

SELECT TABLE_OWNER,
       TABLE_NAME, 
       TOMBSTONE_TABLE,
       ROW_RESOLUTION_COLUMN 
  FROM ALL_GG_AUTO_CDR_TABLES
  ORDER BY TABLE_OWNER, TABLE_NAME;

TABLE_OWNER	TABLE_NAME	TOMBSTONE_TABLE ROW_RESOLUTION_COLUMN
--------------- --------------- --------------- -------------------------
HR		TEST_CDR	DT$_TEST_CDR	CDRTS$ROW

 

View Column Group information

A column group is a logical grouping of one or more columns in a replicated table enabled for Automatic CDR; conflict detection and resolution is performed on the columns in the column group separately from the other columns in the table.

When we configure the TEST_CDR table for Automatic CDR with the ADD_AUTO_CDR procedure, all the columns in the table are added to a default column group. To define other column groups for the same table, run the ADD_AUTO_CDR_COLUMN_GROUP procedure.

The documentation states the following about Column Groups:

“Column groups enable different databases to update different columns in the same row at nearly the same time without causing a conflict. When column groups are configured for a table, conflicts can be avoided even if different databases update the same row in the table. A conflict is not detected if the updates change the values of columns in different column groups”
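A hedged sketch of defining an extra column group on the TEST_CDR table used here; the column list and group name are illustrative, and the parameter names should be checked against the DBMS_GOLDENGATE_ADM documentation:

[oracle@rac02 ~]$ sqlplus c##oggadmin/oracle@pdb1 <<EOF
BEGIN
  DBMS_GOLDENGATE_ADM.ADD_AUTO_CDR_COLUMN_GROUP(
    schema_name       => 'HR',
    table_name        => 'TEST_CDR',
    column_list       => 'REC_DESC',
    column_group_name => 'REC_DESC_CG');
END;
/
EOF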
 

SQL> COLUMN TABLE_OWNER FORMAT A10
COLUMN TABLE_NAME FORMAT A10
COLUMN COLUMN_GROUP_NAME FORMAT A17
COLUMN COLUMN_NAME FORMAT A15
COLUMN RESOLUTION_COLUMN FORMAT A23

SELECT TABLE_OWNER,
       TABLE_NAME, 
       COLUMN_GROUP_NAME,
       COLUMN_NAME,
       RESOLUTION_COLUMN 
  FROM ALL_GG_AUTO_CDR_COLUMNS
  ORDER BY TABLE_OWNER, TABLE_NAME; 

TABLE_OWNE TABLE_NAME COLUMN_GROUP_NAME COLUMN_NAME	RESOLUTION_COLUMN
---------- ---------- ----------------- --------------- -----------------------
HR	   TEST_CDR   IMPLICIT_COLUMNS$ REC_ID		CDRTS$ROW
HR	   TEST_CDR   IMPLICIT_COLUMNS$ REC_DESC	CDRTS$ROW

 
Create two shell scripts which perform an INSERT into the TEST_CDR table, and execute both scripts at the same time via cron.

Note: the primary key column is REC_ID, and we insert a row into the table in both PDB1 and PDB2 using the same value for REC_ID, which will cause a conflict that needs to be resolved.
 

[oracle@rac02 ~]$ vi cdb1_dml.sh
 #!/bin/bash
 export ORACLE_HOME=/acfs_oh/product/12.2.0/dbhome_1
 export ORACLE_SID=cdb1_2
 PATH=$PATH:$ORACLE_HOME/bin
 sqlplus -s system/G#vin2407@pdb1<<EOF
 insert into test_cdr (rec_id,rec_desc) values (1,'INSERT @ PDB1');
 commit;
 EOF

[oracle@rac02 ~]$ chmod +x cdb1_dml.sh

[oracle@rac02 ~]$ vi cdb2_dml.sh
 #!/bin/bash
 export ORACLE_HOME=/acfs_oh/product/12.2.0/dbhome_1
 export ORACLE_SID=cdb2_2
 PATH=$PATH:$ORACLE_HOME/bin
 sqlplus -s system/G#vin2407@pdb2<<EOF
 insert into test_cdr (rec_id,rec_desc) values (1,'INSERT @ PDB2');
 commit;
 EOF

[oracle@rac02 ~]$ chmod +x cdb2_dml.sh

[oracle@rac02 ~]$ crontab -e
 20 14 * * * /home/oracle/cdb1_dml.sh
 20 14 * * * /home/oracle/cdb2_dml.sh

[oracle@rac02 ~]$ crontab -l
 20 14 * * * /home/oracle/cdb1_dml.sh
 20 14 * * * /home/oracle/cdb2_dml.sh

 

After the shell scripts have been automatically executed via cron, verify which row has finally been inserted into the TEST_CDR table in both databases.

Note: the row with the values (1, ‘INSERT @ PDB1’) has been discarded.
 

SQL> conn system/G#vin2407@pdb1
Connected.

SQL> select * from hr.test_cdr;

    REC_ID REC_DESC
---------- --------------------
	 1 INSERT @ PDB2

SQL> conn system/G#vin2407@pdb2
Connected.

SQL> /

    REC_ID REC_DESC
---------- --------------------
	 1 INSERT @ PDB2

 
Note the value of the hidden column CDRTS$ROW: 09-JAN-19 06.24.02.210285 AM. This is used to resolve the INSERT conflict.
 

SQL> alter table hr.test_cdr  modify CDRTS$ROW visible;

Table altered.

SQL> select * from hr.test_cdr;

    REC_ID REC_DESC
---------- --------------------
CDRTS$ROW
---------------------------------------------------------------------------
	 1 INSERT @ PDB2
09-JAN-19 06.24.02.210285 AM

 

Who has won and who has lost?

If we query the DBA_APPLY_ERROR_MESSAGES view in PDB1 we can see the APPLIED_STATE column has the value ‘WON’, while the same column in PDB2 has the value ‘LOST’.

We can also see that the CDRTS$ROW column has been used to resolve the INSERT ROW EXISTS conflict.

This means that the row which was changed in PDB2 has been applied on PDB1 (WON), and the row which was changed on PDB1 (and replicated to PDB2) has been discarded at PDB2 (LOST).
 

SQL>  conn system/G#vin2407@pdb1
Connected.

SQL> select OBJECT_NAME, CONFLICT_TYPE,APPLIED_STATE,CONFLICT_INFO
  2  from DBA_APPLY_ERROR_MESSAGES;

OBJECT_NAM CONFLICT_TYPE      APPLIED CONFLICT_INFO
---------- ------------------ ------- --------------------
TEST_CDR   INSERT ROW EXISTS  WON     CDRTS$ROW:W

SQL> conn system/G#vin2407@pdb2
Connected.

SQL> /

OBJECT_NAM CONFLICT_TYPE      APPLIED CONFLICT_INFO
---------- ------------------ ------- --------------------
TEST_CDR   INSERT ROW EXISTS  LOST    CDRTS$ROW:L

 

How was the conflict resolved using the Latest Timestamp Method?

The conflict is resolved using this criterion:

“If the timestamp of the row LCR is earlier than the timestamp in the table row, then the row LCR is discarded, and the table values are retained.”

So when a change is made in PDB1, the EXT1 Extract captures the change and writes it to the local trail file, PUMP1 transmits the trail over the network, and it is processed by REP1, which inserts into PDB2.

On the other hand, when a change is made in PDB2, the EXT2 Extract captures the change and writes it to the local trail file, PUMP2 transmits the trail over the network, and it is processed by REP2, which inserts into PDB1.

If we look at the trail file processed by REP1 (which inserts into PDB2) using the logdump utility, we can see the timestamp value of the CDRTS$ROW column is: 2019-01-09:06:24:02.210285000
 

2019/01/09 14:24:02.000.630 Insert               Len    65 RBA 2771 
Name: PDB2.HR.TEST_CDR  (TDR Index: 2) 
After  Image:                                             Partition 12   G  s   
 0000 0500 0000 0100 3101 0011 0000 000d 0049 4e53 | ........1........INS  
 4552 5420 4020 5044 4232 0200 1f00 0000 3230 3139 | ERT @ PDB2......2019  
 2d30 312d 3039 3a30 363a 3234 3a30 322e 3231 3032 | -01-09:06:24:02.2102  
 3835 3030 30                                      | 85000  
Column     0 (x0000), Len     5 (x0005)  
 0000 0100 31                                      | ....1  
Column     1 (x0001), Len    17 (x0011)  
 0000 0d00 494e 5345 5254 2040 2050 4442 32        | ....INSERT @ PDB2  
Column     2 (x0002), Len    31 (x001f)  
 0000 3230 3139 2d30 312d 3039 3a30 363a 3234 3a30 | ..2019-01-09:06:24:0  
 322e 3231 3032 3835 3030 30                       | 2.210285000  

If we look at the trail file processed by REP2 (which inserts into PDB1) using the logdump utility, we can see the timestamp value of the CDRTS$ROW column is: 2019-01-09:06:24:02.205639000
 

2019/01/09 14:24:02.000.529 Insert               Len    65 RBA 3394 
Name: PDB1.HR.TEST_CDR  (TDR Index: 2) 
After  Image:                                             Partition 12   G  s   
 0000 0500 0000 0100 3101 0011 0000 000d 0049 4e53 | ........1........INS  
 4552 5420 4020 5044 4231 0200 1f00 0000 3230 3139 | ERT @ PDB1......2019  
 2d30 312d 3039 3a30 363a 3234 3a30 322e 3230 3536 | -01-09:06:24:02.2056  
 3339 3030 30                                      | 39000  
Column     0 (x0000), Len     5 (x0005)  
 0000 0100 31                                      | ....1  
Column     1 (x0001), Len    17 (x0011)  
 0000 0d00 494e 5345 5254 2040 2050 4442 31        | ....INSERT @ PDB1  
Column     2 (x0002), Len    31 (x001f)  
 0000 3230 3139 2d30 312d 3039 3a30 363a 3234 3a30 | ..2019-01-09:06:24:0  
 322e 3230 3536 3339 3030 30                       | 2.205639000  

 
So in this case the value of CDRTS$ROW already in the table is higher than the value contained in the trail file, so the incoming row is ignored and not applied to the database; in the other case the timestamp value in the trail file is higher than the database column value, so the table row was overwritten and replaced.
 

Check the CDR statistics

 

GGSCI (rac01.localdomain) 4> stats rep2 latest reportcdr

Sending STATS request to REPLICAT REP2 ...

Start of Statistics at 2019-01-09 15:04:40.


Integrated Replicat Statistics:

	Total transactions            		           1.00
	Redirected                    		           0.00
	Replicated procedures         		           0.00
	DDL operations                		           0.00
	Stored procedures             		           0.00
	Datatype functionality        		           0.00
	Event actions                 		           0.00
	Direct transactions ratio     		           0.00%

Replicating from PDB2.HR.TEST_CDR to PDB1.HR.TEST_CDR:

*** Latest statistics since 2019-01-09 14:24:26 ***
	Total inserts                   	           1.00
	Total updates                   	           0.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	           1.00
	Total CDR conflicts                    	           1.00
	CDR resolutions succeeded              	           1.00
	CDR INSERTROWEXISTS conflicts          	           1.00

End of Statistics.