Oracle DBA – Tips and Techniques

Oracle Exadata X5-2 Data Guard Configuration


This note describes the procedure for creating an Oracle 11.2.0.4 Data Guard physical standby database with two-node Real Application Clusters (RAC) primary and standby databases on an Oracle Exadata X5-2 eighth rack.

The procedure uses RMAN to create the physical standby database, via the DUPLICATE FROM ACTIVE DATABASE method available from Oracle 11g onwards.

Note – the standby database is created online while the primary database is open and being accessed, and no physical RMAN backups are used for the purpose of creating the standby database.

The note also describes the process of configuring Data Guard Broker to manage the Data Guard environment, and illustrates how to perform a database role reversal via a Data Guard switchover operation.
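At its core, the active duplication boils down to a single RMAN command run while connected to both databases. Below is a minimal sketch, assuming hypothetical TNS aliases exa_prim and exa_stby and a standby DB_UNIQUE_NAME of exadb_sb – the full note covers the RAC- and Exadata-specific parameter settings:

$ rman target sys/password@exa_prim auxiliary sys/password@exa_stby

RMAN> DUPLICATE TARGET DATABASE
        FOR STANDBY
        FROM ACTIVE DATABASE
        DORECOVER
        SPFILE
          SET DB_UNIQUE_NAME='exadb_sb'
        NOFILENAMECHECK;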

 Download the full note ….


Presenting at AUSOUG and AIOUG Conferences in November


I recently presented a paper at the Australian Oracle User Group conference series held in Perth last week. The title of the presentation was Oracle Partitioning – Then and Now, which looked at the features introduced in Oracle Partitioning right from its introduction in Oracle 8.0 way back in 1997 up to the eagerly awaited Oracle 12c Release 2.

I was also a presenter at the recently concluded Sangam 2016 All-India Oracle User Group event held in Bangalore where I conducted two separate hands-on lab sessions on GoldenGate. The topic was Performing a minimal downtime database upgrade from Oracle 11g to Oracle 12c using GoldenGate.

Now an Oracle 12c Certified Master Database Cloud Administrator


I was recently awarded the Oracle 12c Certified Master Database Cloud Administrator certification. In addition to passing the full-day practical Oracle 12c OCM upgrade exam, I had to pass two other exams: 1Z0-027 – Oracle Exadata X3 and X4 Administration and 1Z0-028 – Oracle Database Cloud Administration.

Installing the Oracle GoldenGate monitoring plug-in (13.2.1.0.0) for Cloud Control 13c Release 2

Oracle GoldenGate and Oracle 12c Online Training Commencing January 2017


I will be conducting online training in Oracle GoldenGate as well as the Oracle database, covering areas like RAC, Performance Tuning and Oracle 12c New Features.

Kindly register your interest in the training courses via the Contact form available on the website and I will get back to you as soon as possible with information about the registration process and other details.

 

Commencing week 1 and 2 (January 2017)

  • Oracle GoldenGate 12.2 (Fundamentals)

 

  • Oracle GoldenGate 12.2 (Advanced)

 

 Commencing week 3 and 4 (January 2017)

  • Oracle 12c Release 1 New Features

 

  • Oracle Performance Tuning

GoldenGate 12c Performance Tuning Webinar

$
0
0

I will be conducting two sessions of a webinar on GoldenGate Performance Tuning Tips and Techniques.

Use the link below to register for this FREE webinar!

https://attendee.gotowebinar.com/rt/6709628250976917251

Hurry as space is limited for this free webinar.

 

 

Oracle 12c GoldenGate Implementation Workshop online training

$
0
0

Oracle 12c GoldenGate Implementation Workshop online training commences on 23rd January.

 

This 20-hour workshop will cover topics included in the official Oracle University GoldenGate 12c Essentials, GoldenGate Advanced Configuration and GoldenGate Tuning and Troubleshooting classes.

 

Use the following links to register for the online training classes:

 7.00 to 9.00 PM IST Batch

https://attendee.gotowebinar.com/register/4325373570465792259

 

7.00 to 9.00 PM CST (USA) Batch

https://attendee.gotowebinar.com/register/7493591037135257347

 

The cost is only 499.00 USD, which compares very favorably with the official OU course price of over 3,000 USD!

 

Oracle GoldenGate 12c Implementation Workshop

 

Course Topics and Objectives

  • Oracle GoldenGate 12c (12.2) architecture, topologies and components
  • Installing and deinstalling GoldenGate using both the OUI and the command-line silent method
  • Configuring the Manager process
  • Preparing the Oracle database for GoldenGate replication
  • Creating Classic Extract and Replicat process groups
  • Creating Integrated Extract and Replicat process groups
  • Creating Coordinated Replicats
  • Configuring and managing DDL replication
  • Configuring security and encryption of trail files and credentials in GoldenGate
  • Column mapping
  • Data filtering and transformation
  • Using the Logdump utility to examine trail files
  • Using OBEY files, macros and tokens
  • Handling errors and exceptions in GoldenGate
  • Configuring Automatic Heartbeat Tables
  • Monitoring lag
  • Configuring bi-directional replication
  • Configuring Conflict Detection and Resolution

 

All the topics listed above carry hands-on lab exercises as well.

GoldenGate Performance Tuning Webinar


The Oracle GoldenGate Performance Tuning webinar was well received by over 200 attendees across two separate sessions.

Feedback was very positive, and I am sharing the slide deck, which can be downloaded from the link below:

Download the presentation ….

 


Installing and Configuring Oracle GoldenGate Veridata 12c


This note demonstrates how to install and configure Oracle GoldenGate Veridata 12c – both the server and the agent.

At a high level the steps include:

  • Install Veridata Server
  • Create the GoldenGate Veridata Repository Schema using RCU
  • Configure WebLogic domain for Oracle GoldenGate Veridata
  • Start Admin and Managed Servers
  • Create the VERIDATA_ADMIN user
  • Launch and test Veridata Web User Interface
  • Install the Veridata Agent on the hosts on which you want to run Veridata comparison jobs
  • Configure and start the Veridata agent

Download the note …..

Installing and Configuring Oracle GoldenGate Monitor 12c (12.1.3.0)


GoldenGate Monitor is a web-based monitoring console that provides a real-time graphical overview of all the Oracle GoldenGate instances in our enterprise.

We can view statistics and alerts as well as monitor the performance of all the related GoldenGate components in all environments in our enterprise from a single console.

GoldenGate Monitor can also send alert messages to e-mail and SNMP clients.

This note describes the steps involved in installing and configuring the Oracle GoldenGate 12c Monitor Server and Monitor Agent.

At a high level, these are the different steps:

  • Install JDK 1.7
  • Install Fusion Middleware Infrastructure 12.1.3.0, which will also install WebLogic Server 12.1.3
  • From the Fusion Middleware Infrastructure home run the Repository Creation Utility (RCU) to create an Oracle GoldenGate Monitor-specific repository in an Oracle database.
  • Install Oracle GoldenGate Monitor Server (and optionally Monitor Agent)
  • Create the WebLogic Domain for GoldenGate Monitor
  • Edit the monitor.properties file
  • Configure boot.properties file for WebLogic Admin and Managed Servers
  • Start the WebLogic Admin and Managed Servers
  • Create the GoldenGate Monitor Admin user via the WebLogic Console and grant the user the appropriate roles
  • Install the Oracle GoldenGate Monitor Agent on the target hosts running the GoldenGate environments which we want to monitor
  • Configure the Monitor Agent and edit Config.properties file

Download the note ….

Oracle Database In-Memory 12c Release 2 New Features


The Oracle Database In-Memory 12c Release 2 New Features webinar conducted last week was well received by a global audience and feedback was positive. For those who missed the session, you can download the slide deck from the link below. Feedback and questions are welcome!

12.2_InMemory_new_features

Oracle Database 12c Release 2 (12.2.0.1) upgrade using DBUA


Oracle 12c Release 2 (12.2.0.1) was officially released for on-premises deployment yesterday. I tested an upgrade of one of my 12.1.0.2 test databases using the Database Upgrade Assistant (DBUA) and the upgrade went smoothly.

The parallel upgrade command-line utility catctl.pl has a number of changes and enhancements compared to 12c Release 1, which I will discuss in a later post.

Here are the screenshots of the database upgrade process.

 

[Screenshots: DBUA database upgrade steps]

 

 

Note – I only converted my database to NOARCHIVELOG mode because I did not have the recommended free space in the FRA. Don’t do this in production: ideally you would either back up the archived logs, take a final incremental level 1 backup, or set a Guaranteed Restore Point so as to be able to flash back the database if required.

I did notice that the redo generated by the upgrade process seems to be far more than in earlier version upgrades. Even the DBUA recommendation was to double the Fast Recovery Area space allocation.

 


Oracle 12c Release 2 (12.2.0.1.0) Grid Infrastructure Upgrade


I recently upgraded an Oracle 12c Release 1 (12.1.0.2) Grid Infrastructure environment, hosted in a RAC VirtualBox environment on my laptop, to the latest 12c Release 2 (12.2.0.1.0) version.

Here are some points to be noted related to the upgrade process:

    • The Grid Infrastructure 12c Release 2 (12.2) software is now available as a single image file for direct download and installation. This greatly simplifies the installation of the Grid Infrastructure software and makes it much quicker.

 

    • We just have to extract the image file linuxx64_12201_grid_home.zip into an empty directory where we want the Grid home to be located.

 

    • Once the software has been extracted, we run the gridSetup.sh script, which launches the installer from which we can perform both an initial installation and an upgrade.

 

    • We need to have about 33 GB of free disk space in the ASM disk groups for the upgrade process.

 

    • The mount point which hosts the Grid Infrastructure home needs to have at least 12 GB of free disk space.

 

    • It is now mandatory to store the Oracle Clusterware files, such as the Oracle Cluster Registry (OCR) and voting disks, on ASM – these files can no longer be located on any other kind of shared storage system.

 

    • We have to install the mandatory patch 21255373 in the existing 12.1.0.2 Grid Infrastructure home. In this case a number of prerequisite checks failed, related to memory (a minimum of 8 GB RAM is now needed on each node) as well as to swap size, NTP and resolv.conf. Since this is a test VirtualBox environment we can ignore those and continue with the upgrade – however, we cannot ignore the mandatory patch 21255373.

 

    • In order to install the patch, we also have to download the OPatch patch 6880880 for Oracle 12.2.0.1.0 (OPatch 12.2.0.1.8).

 

    • When we run opatchauto to apply patch 21255373, we get the error java.text.ParseException: Unparseable date. This is because the time zone entry AWST (Australian Western Standard Time) is added to $ORACLE_HOME/inventory/ContentsXML/comps.xml, while opatch uses $ORACLE_HOME/jdk/jre, which is version 1.6 – and Java 1.6 cannot read the AWST entry. We can ignore the error and continue with the patch application, but after the patch has been applied we have to change all occurrences of the string “AWST” in the comps.xml file to “WST”. Otherwise, even though the patch has been applied, the opatch lsinventory command will not show it until the date string in comps.xml is changed (see the sketch after this list).

 

    • The upgrade failed at 46% in the Execute Root Scripts phase. I ran the command crsctl stop crs -f as root on each node, clicked the Retry button, and the upgrade then continued without any error.

 

    • At the end of the upgrade, the Cluster Verification Utility fails because it checks for an NTP configuration appropriate for an Oracle RAC environment. NTP is not configured in this VirtualBox environment, so we can ignore the error.
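
As a quick sketch of the comps.xml fix mentioned in the list above (back up the file first; the path assumes the standard inventory location under the existing Grid home):

$ cd $ORACLE_HOME/inventory/ContentsXML
$ cp comps.xml comps.xml.bak
$ sed -i 's/AWST/WST/g' comps.xml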

 

 

Here are some screenshots captured while the 12c Release 2 Grid Infrastructure upgrade was in progress…

[Screenshots: Grid Infrastructure upgrade steps]

 

 

Note:

Change all occurrences of “AWST” to “WST” in the comps.xml file.

 


 

 

Now the opatch lsinventory command will show that patch 21255373 has been applied.
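
A quick way to confirm this – a sketch, assuming the Grid home is set as the current ORACLE_HOME:

$ $ORACLE_HOME/OPatch/opatch lsinventory | grep 21255373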

 


Oracle Database 12c Release 2 New Feature – Create Data Guard Standby Database Using DBCA


One of the really nice new features in Oracle 12c Release 2 (12.2.0.1) is the ability to create an Oracle Data Guard standby database using DBCA (Database Configuration Assistant). This greatly simplifies the process of creating a standby database and automates a number of steps in the creation process which previously had to be performed manually.

In this example we will see how a 12.2.0.1 Data Guard environment is created via DBCA and then Data Guard Broker (DGMGRL).

The source database is called salesdb and the standby database DB_UNIQUE_NAME will be salesdb_sb.

Primary database host name is host01 and the Standby database host name is host02.

The syntax is:

dbca -createDuplicateDB 
    -gdbName global_database_name 
    -primaryDBConnectionString easy_connect_string_to_primary
    -sid database_system_identifier
    [-createAsStandby 
        [-dbUniqueName db_unique_name_for_standby]]

We will run the command from the standby host host02 as shown below.
 

[oracle@host02 ~]$ dbca -silent -createDuplicateDB -gdbName salesdb -primaryDBConnectionString host01:1521/salesdb -sid salesdb -createAsStandby -dbUniqueName salesdb_sb
Enter SYS user password:
Listener config step
33% complete
Auxiliary instance creation
66% complete
RMAN duplicate
100% complete
Look at the log file "/u02/app/oracle/cfgtoollogs/dbca/salesdb_sb/salesdb.log" for further details.

Connect to the Standby Database and verify the role of the database
 
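A quick check along these lines (a minimal sketch) confirms the role – the standby should report PHYSICAL STANDBY:

SQL> select db_unique_name, database_role, open_mode from v$database;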

 

Note that the SPFILE and password file for the standby database have been automatically created

[oracle@host02 dbs]$ ls -l sp*
-rw-r-----. 1 oracle dba 5632 Mar 22 09:40 spfilesalesdb.ora

[oracle@host02 dbs]$ ls -l ora*
-rw-r-----. 1 oracle dba 3584 Mar 17 14:38 orapwsalesdb

 

Add the required entries to the tnsnames.ora file

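For example, entries along these lines on both hosts (a sketch based on the host and service names used in this example; the listener is assumed to be on port 1521):

salesdb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host01)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = salesdb))
  )

salesdb_sb =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host02)(PORT = 1521))
    (CONNECT_DATA = (SERVER = DEDICATED)(SERVICE_NAME = salesdb_sb))
  )

These aliases match the connect identifiers used when creating the Broker configuration below.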

Continue with the Data Guard Standby Database creation using the Data Guard Broker
 

SQL> alter system set dg_broker_start=true scope=both;

System altered.

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@host01 archivelog]$ dgmgrl
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Mar 17 14:47:27 2017

connect /
Connected to "salesdb"
Connected as SYSDG.
DGMGRL> create configuration 'salesdb_dg'
> as primary database is 'salesdb'
> connect identifier is 'salesdb';
Configuration "salesdb_dg" created with primary database "salesdb"

DGMGRL> add database 'salesdb_sb' as connect identifier is 'salesdb_sb';
Database "salesdb_sb" added
DGMGRL> enable configuration;
Enabled.

 

Create the Standby Redo Log Files on the primary database

 

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u03/app/oradata/salesdb/redo03.log
/u03/app/oradata/salesdb/redo02.log
/u03/app/oradata/salesdb/redo01.log

SQL> select bytes/1048576 from v$log;

BYTES/1048576
-------------
     200
     200
     200


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

 
Create the Standby Redo Log Files on the standby database

 

DGMGRL> connect /
Connected to "salesdb"
Connected as SYSDG.

DGMGRL> edit database 'salesdb_sb' set state='APPLY-OFF';
Succeeded.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1174405120 bytes
Fixed Size          8619984 bytes
Variable Size           436209712 bytes
Database Buffers   721420288 bytes
Redo Buffers              8155136 bytes
Database mounted.

SQL>  alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

SQL> alter database open;

Database altered.

SQL>

 
Verify the Data Guard Configuration
 

DGMGRL> edit database 'salesdb_sb' set state='APPLY-ON';
Succeeded.


DGMGRL> show configuration;

Configuration - salesdb_dg

 Protection Mode: MaxPerformance

 salesdb    - Primary database
   salesdb_sb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 8 seconds ago)

 
Set the property StaticConnectIdentifier to prevent errors during switchover operations
 

DGMGRL> edit database 'salesdb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host01.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';

DGMGRL> edit database 'salesdb_sb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host02.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_sb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';

Edit listener.ora on primary database host and add the lines shown below. Reload the listener.
 

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_DGMGRL)
      (SID_NAME = salesdb)
        )
  )

 
Edit listener.ora on standby database host and add the lines shown below. Reload the listener.
 

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_sb_DGMGRL)
      (SID_NAME = salesdb)
        )
  )

Oracle Database 12c Release 2 New Feature – Application Containers


One of the new multitenancy related features in Oracle 12c Release 2 is Application Containers.

In 12c Release 1, we could have a container database (CDB) host a number of optional pluggable databases, or PDBs. In 12.2.0.1 the multitenancy feature has been enhanced further: we can now have not only CDBs and PDBs but also another component called an Application Container, which in essence is a hybrid of a CDB and a PDB.

So in 12.2.0.1 a CDB can (optionally) contain user-created Application Containers, and those Application Containers can in turn host one or more PDBs.

For example, an Application Container can contain a number of PDBs which contain individual sales data of different regions, but at the same time can share what are called common objects.

Maybe each region’s PDB has data just for that region, but the table structure is the same regardless of the region. In that case the table definition (or metadata) is stored in the application container accessible to all the PDBs hosted by that application container. If any changes are required to be made for application tables, then that DDL change need only be made once in the central application container and that change will then be visible to all the PDBs hosted by that application container.

Or there are some tables which are common to all the PDBs – some kind of master data maybe. And rather than have to store this common data in each individual PDB (as was the case in 12.1.0.2), we just store it once in a central location which is the application container and then that data is visible to all the hosted PDBs.

In other words, an application container functions as an application specific CDB within a CDB.

Think of a Software as a Service (SaaS) deployment model where we are hosting a number of customers and each customer has its own individual data which needs to be stored securely in a separate database but at the same time we need to share some metadata or data which is common to all the customers.

Let’s have a look at a simple example of 12c Release 2 Application Containers at work.

The basic steps are:

  • Create the Application Container
  • Create the Pluggable Databases
  • Install the Application
  • After installing the application, synchronize the pluggable databases with the application container root so that any changes in terms of DDL or DML made by the application are now visible to all hosted pluggable databases
  • Optionally upgrade or deinstall the application

 

Create the Application Container
 

SQL> CREATE PLUGGABLE DATABASE appcon1 AS APPLICATION CONTAINER ADMIN USER appadm IDENTIFIED BY oracle
FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/');  

Pluggable database created.

 
Create the Pluggable Databases which are to be hosted by the Application Container by connecting to the application container root
 

SQL> alter session set container=appcon1;

Session altered.

SQL> alter pluggable database open;

Pluggable database altered.

SQL> CREATE PLUGGABLE DATABASE pdbhr1 ADMIN USER pdbhr1_adm IDENTIFIED BY oracle
FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr1/');

Pluggable database created.

SQL> CREATE PLUGGABLE DATABASE pdbhr2 ADMIN USER pdbhr2_adm IDENTIFIED BY oracle
FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr2/');

Pluggable database created.

SQL> alter pluggable database all open;

Pluggable database altered.

 

Install the application
 
In the first example we will see how some common data is shared among all the pluggable databases. Note the keyword SHARING=DATA.
 

SQL> alter pluggable database application region_app begin install '1.0';

Pluggable database altered.

SQL> create user app_owner identified by oracle;

User created.

SQL> grant connect,resource,unlimited tablespace to app_owner;

Grant succeeded.

SQL> create table app_owner.regions
  2  sharing=data
  3  (region_id number, region_name varchar2(20));

Table created.

SQL> insert into app_owner.regions
  2  values (1,'North');

1 row created.

SQL> insert into app_owner.regions
  2  values (2,'South');

1 row created.

SQL> commit;

Commit complete.

SQL> alter pluggable database application region_app end install '1.0';

Pluggable database altered.

 

View information about Application Containers via the DBA_APPLICATIONS view

 

SQL> select app_name,app_status from dba_applications;

APP_NAME
--------------------------------------------------------------------------------
APP_STATUS
------------
APP$4BDAAF8836A20F9CE053650AA8C0AF21
NORMAL

REGION_APP
NORMAL

Synchronize the pluggable databases with the application root
 
Note that until this is done, changes made by the application install are not visible to the hosted PDBs.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL> select * from app_owner.regions;
select * from app_owner.regions
                        *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 South

SQL> alter session set container=pdbhr2;

Session altered.

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 South

 

Note that no direct DDL or DML is permitted in this case
 

SQL> drop table app_owner.regions;
drop table app_owner.regions
                     *
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action


SQL> insert into app_owner.regions values (3,'East');
insert into app_owner.regions values (3,'East')
                      *
ERROR at line 1:
ORA-65097: DML into a data link table is outside an application action

Let us now upgrade the application we just created and create the same application table, but this time with the keyword SHARING=METADATA
 

SQL> alter pluggable database application region_app begin upgrade '1.0' to '1.1';

Pluggable database altered.

SQL> select app_name,app_status from dba_applications;

APP_NAME
--------------------------------------------------------------------------------
APP_STATUS
------------
APP$4BDAAF8836A20F9CE053650AA8C0AF21
NORMAL

REGION_APP
UPGRADING


SQL> drop table app_owner.regions; 

Table dropped.

SQL> create table app_owner.regions
  2  sharing=metadata
  3  (region_id number,region_name varchar2(20));

Table created.

SQL> alter pluggable database application region_app end upgrade;

Pluggable database altered.

 
We can now see that the table definition is the same in both the PDBs, but each PDB can now insert its own individual data in the table.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> desc app_owner.regions
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 REGION_ID					    	NUMBER
 REGION_NAME					VARCHAR2(20)

SQL> insert into app_owner.regions 
  2  values (1,'North');

1 row created.

SQL> insert into app_owner.regions 
  2  values (2,'North-East');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 North-East

SQL> alter session set container=pdbhr2;

Session altered.

SQL>  alter pluggable database application region_app sync;

Pluggable database altered.

SQL> desc app_owner.regions
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 REGION_ID					    NUMBER
 REGION_NAME					    VARCHAR2(20)

SQL> select * from app_owner.regions;

no rows selected

SQL> insert into app_owner.regions 
  2  values (1,'South');

1 row created.

SQL> insert into app_owner.regions 
  2  values (2,'South-East');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 South
	 2 South-East

 
While DML activity is permitted in this case, DDL activity is still not permitted.
 

SQL> drop table app_owner.regions;
drop table app_owner.regions
                     *
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action


SQL> alter table app_owner.regions 
  2  add (region_location varchar2(10));
alter table app_owner.regions
*
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action

 
We will now perform another upgrade of the application, and this time note the keyword SHARING=EXTENDED DATA. In this case, while some portion of the data is common and shared among all the PDBs, each individual PDB still has the flexibility to store additional PDB-specific data in the table alongside the common data which is the same for all the PDBs.
 


SQL> alter session set container=appcon1;

Session altered.

SQL> alter pluggable database application region_app begin upgrade '1.1' to '1.2';

Pluggable database altered.

SQL> drop table app_owner.regions;

Table dropped.

SQL> create table app_owner.regions
  2  sharing=extended data
  3  (region_id number,region_name varchar2(20));

Table created.

SQL> insert into app_owner.regions
  2  values (1,'North');

1 row created.

SQL> commit;

Commit complete.

SQL> alter pluggable database application region_app end upgrade;

Pluggable database altered.

 
Note that the PDBs share some common data, but each individual PDB can insert its own data.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL>  alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North




SQL> insert into app_owner.regions 
  2  values
  3  (2,'North-West');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 North-West

SQL> alter session set container=pdbhr2;

Session altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 South
	 2 South-East

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North



Oracle Database 12.2 New Feature – Pluggable Database Performance Profiles


In the earlier 12.1.0.2 Oracle database version, we could limit the amount of CPU utilization as well as Parallel Server allocation at the PDB level via Resource Plans.

Now in 12c Release 2, we can not only regulate CPU and parallelism at the pluggable database level, but also restrict the amount of memory that each PDB hosted by a container database (CDB) uses.

Further, we can also limit the amount of I/O operations that each PDB performs, so we now have a far improved Resource Manager at work, ensuring that no PDB hogs all the CPU or I/O because of some runaway query and thereby impacts the other PDBs hosted in the same CDB.

We can now limit the amount of SGA or PGA that an individual PDB can utilize, as well as ensure that certain PDBs are always guaranteed a minimum level of available SGA and PGA memory.

For example we can now issue SQL statements like these while connected to the individual PDB.

 

SQL> ALTER SYSTEM SET SGA_TARGET = 500M SCOPE = BOTH;

SQL> ALTER SYSTEM SET SGA_MIN_SIZE = 300M SCOPE = BOTH;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 200M SCOPE = BOTH;

SQL> ALTER SYSTEM SET MAX_IOPS = 10000 SCOPE = BOTH;

 
Another 12c Release 2 New Feature related to Multitenancy is Performance Profiles.

With Performance Profiles we can manage resources for large numbers of PDBs by specifying Resource Manager directives for profiles instead of for each individual PDB.

These profiles are then allocated to the PDBs via the initialization parameter DB_PERFORMANCE_PROFILE.

Let us look at a worked example of Performance Profiles.

In this example we have three PDBs (PDB1, PDB2 and PDB3) hosted in the container database CDB1. The PDB1 pluggable database hosts some mission-critical applications, and we need to ensure that PDB1 gets a higher share of memory, I/O and CPU resources than PDB2 and PDB3.

So we will enforce this resource allocation via two Performance Profiles, which we call TIER1 and TIER2.

Here are the steps:

 

Create a Pending Area

 

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA ();

PL/SQL procedure successfully completed.

 

 

Create a CDB Resource Plan
 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
         plan    => 'profile_plan',
         comment => 'Performance Profile Plan allocating highest share of resources to PDB1');
     END;
     /

PL/SQL procedure successfully completed.

 
Create the CDB resource plan directives for the PDBs

The Tier 1 performance profile gives PDB1 the largest share (3 shares) of the available CPU and parallel server resources, with no upper limit on CPU utilization or parallel server execution. In addition, it guarantees a minimum allocation of at least 50% of the available memory.

 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
         plan       => 'profile_plan',
         profile    => 'Tier1',
         shares     => 3,
         memory_min => 50);
     END;
     /

PL/SQL procedure successfully completed.

 

The Tier 2 performance profile is more restrictive: it has fewer shares than Tier 1, limits CPU/parallel server usage to 40%, and caps memory usage at the PDB level at a maximum of 25% of the available memory.

 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
         plan              => 'profile_plan',
         profile           => 'Tier2',
         shares            => 2,
         utilization_limit => 40,
         memory_limit      => 25);
     END;
     /

PL/SQL procedure successfully completed.

 

Validate and Submit the Pending Area

 

SQL> exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SQL> exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

PL/SQL procedure successfully completed.


 
Allocate Performance Profiles to PDBs

 

TIER1 Performance Profile is allocated to PDB1 and TIER2 Performance Profile is allocated to PDB2 and PDB3.

 

SQL> alter session set container=pdb1;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER1' scope=spfile;

System altered.

SQL> alter session set container=pdb2;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.

SQL> alter session set container=pdb3;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.

 

Set the Resource Plan at the CDB level

 

SQL> conn / as sysdba

Connected.

SQL> alter system set resource_manager_plan='PROFILE_PLAN' scope=both;

System altered.

 

Restart the PDBs so that the Performance Profiles take effect

 

SQL> alter pluggable database all close immediate;

Pluggable database altered.

SQL> alter pluggable database all open;

Pluggable database altered.


 
Monitor memory utilization at PDB level

 

The V$RSRCPDBMETRIC view enables us to track the amount of memory used by each PDB.
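
A minimal sketch of such a query (the memory columns shown are as documented for V$RSRCPDBMETRIC in 12.2; the join to CDB_PDBS is just to show the PDB names):

SQL> select p.pdb_name, r.sga_bytes, r.buffer_cache_bytes, r.pga_bytes
     from v$rsrcpdbmetric r, cdb_pdbs p
     where r.con_id = p.con_id
     order by p.pdb_name;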

We can see that the PDB1 belonging to the profile TIER1 has almost double the memory allocated to the other two PDBs in profile TIER2.

Oracle 12.2 has a lot of new exciting features. Learn all about these at a forthcoming online training session. Contact prosolutions@gavinsoorma.com to register interest!

Oracle Database 12.2 New Feature – PDB Lockdown Profiles


In an earlier post I mentioned that one of the new features in Oracle Database 12.2 is the ability to set SGA and PGA memory-related parameters at the individual PDB level. This enables us to further limit or define the resources which a particular PDB can use, and to manage resources more efficiently in a multitenant environment.

In Oracle 12c Release 2 we can go further and limit the operations which can be performed within a particular PDB, as well as restrict the features which can be used or enabled – all at the individual PDB level. We can also limit the network connectivity a PDB has by enabling or disabling the use of network-related packages like UTL_SMTP, UTL_HTTP and UTL_TCP at the PDB level.

This is done via the new 12.2 feature called Lockdown Profiles.

We create lockdown profiles via the CREATE LOCKDOWN PROFILE statement while connected to the CDB root, and after the lockdown profile has been created we add the required restrictions or limits which we would like to enforce via the ALTER LOCKDOWN PROFILE statement.

To assign the lockdown profile to a particular PDB, we use the PDB_LOCKDOWN initialization parameter which will contain the name of the lockdown profile we have earlier created.

If we set the PDB_LOCKDOWN parameter at the CDB level, it applies to all the PDBs in the CDB. We can also set the PDB_LOCKDOWN parameter at the PDB level, and we can have different PDB_LOCKDOWN values for different PDBs, as we will see in the example below.

Let us have a look at an example of PDB Lockdown Profiles at work.

In our CDB we have two pluggable databases, PDB1 and PDB2. We want to limit certain operations depending on the PDB involved.

Our requirements are the following:

  • We want to ensure that in PDB1 the value of SGA_TARGET cannot be altered – so even a privileged user cannot allocate additional memory to the PDB. However, if memory is available, the PGA allocation can be altered.
  • PDB1 can be shut down only from the root container, not from within the pluggable database itself.
  • The Partitioning feature is not available in PDB2.

 

Create the Lockdown Profiles
 

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create lockdown profile pdb1_profile;

Lockdown Profile created.

SQL> create lockdown profile pdb2_profile;

Lockdown Profile created.

 
Alter Lockdown Profile pdb1_profile
 

SQL> alter lockdown profile pdb1_profile
     disable statement =('ALTER SYSTEM') 
     clause=('SET')
     OPTION = ('SGA_TARGET');

Lockdown Profile altered.



SQL> alter lockdown profile pdb1_profile 
     disable statement =('ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE');

Lockdown Profile altered.

 
Alter Lockdown Profile pdb2_profile
 

SQL> alter lockdown profile pdb2_profile 
     DISABLE OPTION = ('PARTITIONING');

Lockdown Profile altered.

 
Enable the Lockdown Profiles for both PDB1 and PDB2 pluggable databases
 

SQL> conn / as sysdba
Connected.

SQL> alter session set container=PDB1;

Session altered.

SQL> alter system set PDB_LOCKDOWN='PDB1_PROFILE';

System altered.

SQL> alter session set container=PDB2;

Session altered.

SQL> alter system set PDB_LOCKDOWN='PDB2_PROFILE';

System altered.

 

Connect to PDB1 and try to increase the values of the parameters SGA_TARGET and PGA_AGGREGATE_TARGET

 
Note that we cannot alter SGA_TARGET because it is prevented by the lockdown profile in place, but we can alter PGA_AGGREGATE_TARGET because the lockdown profile clause only applies to the ALTER SYSTEM SET SGA_TARGET command.
 

SQL> alter session set container=PDB1;

Session altered.

SQL> alter system set sga_target=800m;
alter system set sga_target=800m
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set pga_aggregate_target=200m;

System altered.

 
Connect to PDB2 and try to create a partitioned table
 

SQL> CREATE TABLE testme
     (id NUMBER,
      name VARCHAR2 (60))
   PARTITION BY HASH (id)
   PARTITIONS 4    ;
CREATE TABLE testme
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

 
Connect to PDB1 and try to shut down the pluggable database
 
Note that while we cannot shut down PDB1, we are able to shut down PDB2.
 

SQL> alter session set container=pdb1;

Session altered.

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE
*
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> alter session set container=pdb2;

Session altered.

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

Pluggable database altered.

 

GoldenGate INSERTALLRECORDS and OGG-01154 SQL error 1400


The GoldenGate INSERTALLRECORDS parameter can be used in cases where the requirement is to maintain, on the target database, transaction history or change data capture (CDC) tables which keep track of the changes a table undergoes at the row level.

So every INSERT, UPDATE or DELETE statement on the source tables is captured as an INSERT statement on the target database.

But in certain cases update statements issued on the source database can cause the replicat process to abend with an error:

“ORA-01400: cannot insert NULL”.

This can happen when the table has NOT NULL columns that were not touched by the update: when the update is converted to an insert, the trail file will not have values for those columns, so the insert will use NULLs and consequently fail with the ORA-1400 error.
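
For reference, here is a minimal sketch of a Replicat parameter file for this kind of CDC setup – the group name rep2 and the MAP/COLMAP clause correspond to what appears in the test case below, while the credential alias is hypothetical:

REPLICAT rep2
USERIDALIAS oggsuser_targetdb
INSERTALLRECORDS
MAP SYSTEM.MYTABLES, TARGET SYSTEM.MYTABLES_CDC,
COLMAP (USEDEFAULTS,
  CHANGE_DATE = @GETENV ('GGHEADER', 'COMMITTIMESTAMP'),
  OPER_TYPE = @GETENV ('GGHEADER', 'OPTYPE'));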


Test Case

We create two tables – SYSTEM.MYTABLES in the source database and SYSTEM.MYTABLES_CDC in the target database.

The SYSTEM.MYTABLES_CDC table on the target has two additional columns for maintaining the CDC or transaction history – OPER_TYPE, which captures the type of DML operation on the table, and CHANGE_DATE, which captures the timestamp of when the change took place.

We create a primary key constraint on the source table – note that the target table has no similar constraint, as rows will be inserted all the time into the CDC table regardless of whether the DML statement on the source was an INSERT, UPDATE or DELETE.

SQL> create table system.mytables
  2  (owner VARCHAR2(30) NOT NULL,
  3   table_name VARCHAR2(30) NOT NULL,
  4  tablespace_name VARCHAR2(30) NOT NULL,
  5  logging VARCHAR2(3) NOT NULL);

Table created.

SQL> alter table system.mytables add constraint pk_mytables primary key (owner,table_name);

Table altered.


SQL SYS@euro> create table system.mytables_cdc
  2  (owner VARCHAR2(30) NOT NULL,
  3    table_name VARCHAR2(30) NOT NULL,
  4  tablespace_name VARCHAR2(30) NOT NULL,
  5  logging VARCHAR2(3) NOT NULL,
  6  oper_type VARCHAR2(20),
  7  change_date TIMESTAMP);

Table created.

We now issue the ADD TRANDATA GGSCI command.

Note that issuing the ADD TRANDATA command will enable supplemental logging at the table level for PK columns, UK columns and FK columns – not ALL columns.



GGSCI (ogg2.localdomain as oggsuser@sourcedb) 64> dblogin useridalias oggsuser_sourcedb
Successfully logged into database.

GGSCI (ogg2.localdomain as oggsuser@sourcedb) 65> add trandata system.mytables

Logging of supplemental redo data enabled for table SYSTEM.MYTABLES.
TRANDATA for scheduling columns has been added on table 'SYSTEM.MYTABLES'.
GGSCI (ogg2.localdomain as oggsuser@sourcedb) 66> info trandata system.mytables

Logging of supplemental redo log data is enabled for table SYSTEM.MYTABLES.

Columns supplementally logged for table SYSTEM.MYTABLES: OWNER, TABLE_NAME.


We can query the DBA_LOG_GROUPS view to get information about the supplemental logging added for the table MYTABLES.

The ADD TRANDATA command has created a supplemental log group called GGS_72909, and we can see that supplemental logging is enabled for all columns that are part of a primary key, unique key or foreign key constraint.


SQL> SELECT
  2  LOG_GROUP_NAME,
  3   TABLE_NAME,
  4  DECODE(ALWAYS, 'ALWAYS', 'Unconditional','CONDITIONAL', 'Conditional') ALWAYS,
  5  LOG_GROUP_TYPE
  6  FROM DBA_LOG_GROUPS
  7   WHERE TABLE_NAME='MYTABLES' AND OWNER='SYSTEM';

no rows selected

SQL> /

                                     Conditional or
Log Group            Table           Unconditional  Type of Log Group
-------------------- --------------- -------------- --------------------
GGS_72909            MYTABLES        Unconditional  USER LOG GROUP
SYS_C009814          MYTABLES        Unconditional  PRIMARY KEY LOGGING
SYS_C009815          MYTABLES        Conditional    UNIQUE KEY LOGGING
SYS_C009816          MYTABLES        Conditional    FOREIGN KEY LOGGING



SQL> select LOG_GROUP_NAME,COLUMN_NAME from DBA_LOG_GROUP_COLUMNS
  2  where OWNER='SYSTEM' and TABLE_NAME='MYTABLES'
  3  order by 1,2;


Log Group            COLUMN_NAME
-------------------- ------------------------------
GGS_72909            OWNER
GGS_72909            TABLE_NAME


Let us now test the case.

We insert some rows into the source table MYTABLES – these rows are replicated fine to the target table MYTABLES_CDC.


SQL> insert into system.mytables
  2  select OWNER,TABLE_NAME,TABLESPACE_NAME,LOGGING
  3   from DBA_TABLES
  4   where OWNER='SYSTEM' and TABLESPACE_NAME is NOT NULL;

110 rows created.

SQL> commit;

Commit complete.



SQL SYS@euro> select count(*) from system.mytables_cdc;

  COUNT(*)
----------
       110


Let us now see what happens when we run an UPDATE statement on the source database. Note the columns involved in the UPDATE are not PK or UK columns.


SQL> update system.mytables set tablespace_name='USERS' where tablespace_name='SYSTEM';

89 rows updated.

SQL> commit;

Commit complete.


Immediately we will see that the Replicat process on the target has ABENDED, and if we examine the Replicat report log we can see the error messages shown below.

2016-06-25 14:40:26  INFO    OGG-06505  MAP resolved (entry SYSTEM.MYTABLES): MAP "SYSTEM"."MYTABLES", TARGET SYSTEM.MYTABLES_CDC, COLMAP (USEDEFAULTS, CHANGE_DATE=@GETENV ('GGHEADER', 'COM
MITTIMESTAMP'), OPER_TYPE=@GETENV ('GGHEADER', 'OPTYPE')).

2016-06-25 14:40:46  WARNING OGG-06439  No unique key is defined for table MYTABLES_CDC. All viable columns will be used to represent the key, but may not guarantee uniqueness. KEYCOLS may
be used to define the key.
Using the following default columns with matching names:
  OWNER=OWNER, TABLE_NAME=TABLE_NAME, TABLESPACE_NAME=TABLESPACE_NAME, LOGGING=LOGGING

2016-06-25 14:40:46  INFO    OGG-06510  Using the following key columns for target table SYSTEM.MYTABLES_CDC: OWNER, TABLE_NAME, TABLESPACE_NAME, LOGGING, OPER_TYPE, CHANGE_DATE.


2016-06-25 14:45:18  WARNING OGG-02544  Unhandled error (ORA-26688: missing key in LCR) while processing the record at SEQNO 7, RBA 19037 in Integrated mode. REPLICAT will retry in Direct m
ode.

2016-06-25 14:45:18  WARNING OGG-01154  SQL error 1400 mapping SYSTEM.MYTABLES to SYSTEM.MYTABLES_CDC OCI Error ORA-01400: cannot insert NULL into ("SYSTEM"."MYTABLES_CDC"."LOGGING") (statu
s = 1400), SQL .


There is a column called LOGGING which is a NOT NULL column – the GoldenGate trail file has information about the other columns – OWNER, TABLE_NAME and TABLESPACE_NAME.

But there is no data captured in the trail file for the LOGGING column.

Using the LOGDUMP utility we can see this.

Logdump 103 >open ./dirdat/rt000007
Current LogTrail is /ogg/euro/dirdat/rt000007
Logdump 104 >ghdr on
Logdump 105 >detail on
Logdump 106 >detail data
Logdump 107 >pos 32008
Reading forward from RBA 32008
Logdump 108 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    52  (x0034)   IO Time    : 2016/06/25 14:45:02.999.764
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x02)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         67       AuditPos   : 8056764
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/06/25 14:45:02.999.764 FieldComp            Len    52 RBA 32008
Name: SYSTEM.MYTABLES
After  Image:                                             Partition 4   G  e
 0000 000a 0000 0006 5359 5354 454d 0001 0015 0000 | ........SYSTEM......
 0011 4c4f 474d 4e52 5f50 4152 414d 4554 4552 2400 | ..LOGMNR_PARAMETER$.
 0200 0900 0000 0555 5345 5253                     | .......USERS
Column     0 (x0000), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     1 (x0001), Len    21 (x0015)
 0000 0011 4c4f 474d 4e52 5f50 4152 414d 4554 4552 | ....LOGMNR_PARAMETER
 24                                                | $
Column     2 (x0002), Len     9 (x0009)
 0000 0005 5553 4552 53                            | ....USERS


The table has NOT NULL columns that were not updated (the LOGGING column was not part of the UPDATE statement).

Since the column was not touched by the update, when the update is converted to an insert the trail file has no value for that column, so the insert uses NULL and consequently fails with ORA-1400 – this is expected behavior.

We can see that the update on the source database is converted into an insert statement on the target – this is because of the INSERTALLRECORDS parameter we are using in the Replicat parameter file.


So the solution is to enable supplemental logging for ALL columns of the source table.

We will now add supplemental log data to all columns

SQL> alter table system.mytables add supplemental log data (ALL) columns;

Table altered.

Note that the DBA_LOG_GROUPS view as well as the INFO TRANDATA command now show that all the columns have supplemental logging enabled.


SELECT
 LOG_GROUP_NAME,
  TABLE_NAME,
 DECODE(ALWAYS, 'ALWAYS', 'Unconditional','CONDITIONAL', 'Conditional') ALWAYS,
 LOG_GROUP_TYPE
  FROM DBA_LOG_GROUPS
  WHERE TABLE_NAME='MYTABLES' AND OWNER='SYSTEM';
SQL>   2    3    4    5    6    7
                                     Conditional or
Log Group            Table           Unconditional  Type of Log Group
-------------------- --------------- -------------- --------------------
GGS_72909            MYTABLES        Unconditional  USER LOG GROUP
SYS_C009814          MYTABLES        Unconditional  PRIMARY KEY LOGGING
SYS_C009815          MYTABLES        Conditional    UNIQUE KEY LOGGING
SYS_C009816          MYTABLES        Conditional    FOREIGN KEY LOGGING
SYS_C009817          MYTABLES        Unconditional  ALL COLUMN LOGGING


GGSCI (ogg2.localdomain as oggsuser@sourcedb) 12> info trandata system.mytables

Logging of supplemental redo log data is enabled for table SYSTEM.MYTABLES.

Columns supplementally logged for table SYSTEM.MYTABLES: ALL.


SQL> alter system switch logfile;

System altered.

Note: STOP and RESTART the Extract and Pump

Note the position the Extract pump was writing to.

GGSCI (ogg2.localdomain as oggsuser@sourcedb) 28> info pext1 detail

EXTRACT    PEXT1     Last Started 2016-06-25 15:04   Status RUNNING
Checkpoint Lag       00:00:00 (updated 00:00:06 ago)
Process ID           31081
Log Read Checkpoint  File ./dirdat/lt000012
                     2016-06-25 15:05:16.927851  RBA 1476

  Target Extract Trails:

  Trail Name                                       Seqno        RBA     Max MB Trail Type

  ./dirdat/rt                                          9       1522        100 RMTTRAIL


Delete and recreate the Integrated Replicat

GGSCI (ogg1.localdomain as oggsuser@euro) 2> delete replicat rep2

2016-06-25 15:07:11  WARNING OGG-02541  Replicat could not process some SQL errors before being dropped or unregistered. This may cause the data to be out of sync.

2016-06-25 15:07:14  INFO    OGG-02529  Successfully unregistered REPLICAT REP2 inbound server OGG$REP2 from database.
Deleted REPLICAT REP2.


GGSCI (ogg1.localdomain as oggsuser@euro) 3> add replicat rep2 integrated exttrail ./dirdat/rt
REPLICAT (Integrated) added.

Restart the replicat from the point where it had abended

GGSCI (ogg1.localdomain as oggsuser@euro) 4> alter rep2 extseqno 9 extrba 1522

2016-06-25 15:07:55  INFO    OGG-06594  Replicat REP2 has been altered through GGSCI. Even the start up position might be updated, duplicate suppression remains active in next startup. To override duplicate suppression, start REP2 with NOFILTERDUPTRANSACTION option.

REPLICAT (Integrated) altered.

Now run a similar update statement which earlier had caused the replicat to abend


SQL> update system.mytables set tablespace_name='SYSTEM'  where tablespace_name='USERS';

89 rows updated.

SQL> commit;

Commit complete.

We can see that this time the Replicat has successfully applied the changes to the target table – the 89 rows which were updated on the source table have been transformed into 89 INSERT statements in the CDC table on the target database.

GGSCI (ogg1.localdomain as oggsuser@euro) 14> stats replicat rep2 table SYSTEM.MYTABLES_CDC latest

Sending STATS request to REPLICAT REP2 ...

Start of Statistics at 2016-06-25 15:11:59.

.....
......


Replicating from SYSTEM.MYTABLES to SYSTEM.MYTABLES_CDC:

*** Latest statistics since 2016-06-25 15:11:09 ***
        Total inserts                                     89.00
        Total updates                                      0.00
        Total deletes                                      0.00
        Total discards                                     0.00
        Total operations                                  89.00

End of Statistics.

If we now examine the trail file on the target, we can see that this time all the table columns, including the LOGGING column (which was missing earlier), have been captured in the trail file.

Logdump 109 >open ./dirdat/rt000009
Current LogTrail is /ogg/euro/dirdat/rt000009
Logdump 110 >ghdr on
Logdump 111 >detail on
Logdump 112 >detail data
Logdump 113 >pos 1522
Reading forward from RBA 1522
Logdump 114 >n
___________________________________________________________________
Hdr-Ind    :     E  (x45)     Partition  :     .  (x04)
UndoFlag   :     .  (x00)     BeforeAfter:     A  (x41)
RecLength  :    56  (x0038)   IO Time    : 2016/06/25 15:10:52.999.941
IOType     :    15  (x0f)     OrigNode   :   255  (xff)
TransInd   :     .  (x00)     FormatType :     R  (x52)
SyskeyLen  :     0  (x00)     Incomplete :     .  (x00)
AuditRBA   :         68       AuditPos   : 186384
Continued  :     N  (x00)     RecCount   :     1  (x01)

2016/06/25 15:10:52.999.941 FieldComp            Len    56 RBA 1522
Name: SYSTEM.MYTABLES
After  Image:                                             Partition 4   G  b
 0000 000a 0000 0006 5359 5354 454d 0001 000d 0000 | ........SYSTEM......
 0009 4d59 4f42 4a45 4354 5300 0200 0a00 0000 0653 | ..MYOBJECTS........S
 5953 5445 4d00 0300 0700 0000 0359 4553           | YSTEM........YES
Column     0 (x0000), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     1 (x0001), Len    13 (x000d)
 0000 0009 4d59 4f42 4a45 4354 53                  | ....MYOBJECTS
Column     2 (x0002), Len    10 (x000a)
 0000 0006 5359 5354 454d                          | ....SYSTEM
Column     3 (x0003), Len     7 (x0007)
 0000 0003 5945 53                                 | ....YES

Note the data in the CDC table on the target

SQL SYS@euro>  select tablespace_name,oper_type from system.mytables_cdc
  2   where TABLE_NAME ='MYTABLES';

TABLESPACE_NAME                OPER_TYPE
------------------------------ --------------------
SYSTEM                         INSERT
USERS                          SQL COMPUPDATE
SYSTEM                         SQL COMPUPDATE

Oracle 12c Resource Manager – CDB and PDB resource plans


In a CDB, since multiple pluggable databases share a set of common resources, we can use Resource Manager to prevent multiple workloads from competing with each other for both system and CDB resources.

Let us look at an example of managing resources for pluggable databases (between PDBs) at the multitenant container database level as well as within a particular PDB.

The same can be achieved using 12c Cloud Control, but displayed here are the steps to be performed at the command line using the DBMS_RESOURCE_MANAGER package.

With Resource Manager at the Pluggable Database level, we can limit CPU usage of a particular PDB as well as the number of parallel execution servers which a particular PDB can use.

To allocate resources among PDBs we use the concept of shares: we assign shares to particular PDBs, and a higher share for a PDB results in a higher guaranteed allocation of resources to that PDB.

At a high level the steps involved include:

• Create a Pending Area

• Create a CDB resource plan

• Create directives for the PDBs

• Optionally update the default directives, which specify the resources allocated to newly created PDBs or used when no directives have been explicitly defined for a particular PDB

• Optionally update the directives which apply by default to the Automatic Maintenance Tasks configured to run in the out-of-the-box maintenance windows

• Validate the Pending Area

• Submit the Pending Area

• Enable the plan at the CDB level by setting the RESOURCE_MANAGER_PLAN parameter

Let us look at an example.

We have 5 Pluggable databases contained in the Container database and we wish to enable resource management at the PDB level.

We wish to guarantee CPU allocation in the ratio 4:3:1:1:1, so that the CPU is distributed among the PDBs in this manner:

PDBPROD1: 40%
PDBPROD2: 30%
PDBPROD3: 10%
PDBPROD4: 10%
PDBPROD5: 10%

Further, for the PDBs PDBPROD3, PDBPROD4 and PDBPROD5 we wish to ensure that CPU utilization for these 3 PDBs never crosses the 70% limit.

Also, for these 3 PDBs we would like to limit the maximum number of parallel execution servers available to the PDB.

The value of 70% means that if the PARALLEL_SERVERS_TARGET initialization parameter is 200, then the PDB cannot use more than a maximum of 140 parallel execution servers. For PDBPROD1 and PDBPROD2 there is no limit, so they can use all 200 parallel execution servers if available.

We also want to limit the resources used by the Automatic Maintenance Tasks jobs when they execute in a particular job window, and we want to specify a default resource allocation limit for newly created PDBs or for those PDBs where a resource limit directive has not been explicitly defined. A minimal sketch of the plan and directives follows.
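
As a sketch of what the plan and directives look like (the plan name prod_cdb_plan is hypothetical; only PDBPROD1 and PDBPROD3 are shown – the full note covers all five PDBs plus the default and maintenance-task directives):

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
         plan    => 'prod_cdb_plan',
         comment => 'CDB plan with 4:3:1:1:1 share allocation');
       -- PDBPROD1: 4 shares, no CPU or parallel server cap
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
         plan               => 'prod_cdb_plan',
         pluggable_database => 'PDBPROD1',
         shares             => 4);
       -- PDBPROD3: 1 share, capped at 70% CPU and 70% of PARALLEL_SERVERS_TARGET
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
         plan                  => 'prod_cdb_plan',
         pluggable_database    => 'PDBPROD3',
         shares                => 1,
         utilization_limit     => 70,
         parallel_server_limit => 70);
       DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
       DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
     END;
     /

The plan is then enabled at the CDB level with ALTER SYSTEM SET RESOURCE_MANAGER_PLAN='prod_cdb_plan';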

Download the note …
