Channel: Oracle DBA – Tips and Techniques

Oracle Exadata X5-2 Data Guard Configuration


This note describes the procedure for creating an Oracle 11.2.0.4 Data Guard Physical Standby database with a two-node Real Application Clusters (RAC) Primary and Standby database on an Oracle Exadata X5-2 eighth rack.

The procedure uses the RMAN DUPLICATE FROM ACTIVE DATABASE method, available since Oracle 11g, to create the Physical Standby database.

Note – the Standby database is created online while the Primary database remains open and in use; no physical RMAN backups are required for the purpose of creating the standby database.
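For readers new to this method, the heart of the approach is a single RMAN DUPLICATE command run against the open primary. The sketch below is a generic illustration only and is not lifted from the note itself: the connect strings, channel names and the standby DB_UNIQUE_NAME are placeholders that would need to be adapted to the actual RAC/Exadata environment.

# minimal sketch only – placeholder connect strings and names
rman target sys@prim_tns auxiliary sys@stby_tns

RUN {
  ALLOCATE CHANNEL prim1 DEVICE TYPE DISK;
  ALLOCATE AUXILIARY CHANNEL stby1 DEVICE TYPE DISK;
  # copy the running primary over the network and leave the copy as a physical standby
  DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
    SPFILE
      SET DB_UNIQUE_NAME='stby_unique_name'
    NOFILENAMECHECK
    DORECOVER;
}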

The note also describes how to configure Data Guard Broker to manage the Data Guard environment and illustrates how to perform a database role reversal via a Data Guard switchover operation.
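As a quick taste of the switchover step (the full procedure is in the downloadable note), a Broker-managed role reversal essentially comes down to a couple of DGMGRL commands; 'standby_unique_name' below is a placeholder for the standby DB_UNIQUE_NAME.

DGMGRL> connect sys@primary_tns
DGMGRL> show configuration;
DGMGRL> switchover to 'standby_unique_name';
DGMGRL> show configuration;

The first SHOW CONFIGURATION confirms the configuration status is SUCCESS before the role change; the second, run after the switchover completes, should show the former standby now acting as the primary.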

 Download the full note ….


Presenting at AUSOUG and AIOUG Conferences in November


I recently presented a paper at the Australian Oracle User Group conference series held in Perth last week. The title of the presentation was Oracle Partitioning – Then and Now, which looked at all the features introduced in Oracle Partitioning from its introduction in Oracle 8.0 back in 1997 up to the eagerly awaited Oracle 12c Release 2.

I was also a presenter at the recently concluded Sangam 2016 All-India Oracle User Group event held in Bangalore where I conducted two separate hands-on lab sessions on GoldenGate. The topic was Performing a minimal downtime database upgrade from Oracle 11g to Oracle 12c using GoldenGate.

Now an Oracle 12c Certified Master Database Cloud Administrator


I was recently awarded the Oracle 12c Certified Master Database Cloud Administrator certification. In addition to passing the full-day practical Oracle 12c OCM upgrade exam, I had to pass two other exams: 1Z0-027 – Oracle Exadata X3 and X4 Administration and 1Z0-028 – Oracle Database Cloud Administration.

Installing the Oracle GoldenGate monitoring plug-in (13.2.1.0.0) for Cloud Control 13c Release 2

Oracle GoldenGate and Oracle 12c Online Training Commencing January 2017


I will be conducting online training in Oracle GoldenGate as well as the Oracle database, covering areas like RAC, Performance Tuning and Oracle 12c New Features.

Kindly register your interest in the training courses via the Contact form available on the website and I will get back to you promptly with information about the registration process and other details.

 

Commencing week 1 and 2 (January 2017)

  • Oracle GoldenGate 12.2 (Fundamentals)
  • Oracle GoldenGate 12.2 (Advanced)

Commencing week 3 and 4 (January 2017)

  • Oracle 12c Release 1 New Features
  • Oracle Performance Tuning

GoldenGate 12c Performance Tuning Webinar


I will be conducting two sessions of a webinar on GoldenGate Performance Tuning Tips and Techniques.

Use the link below to register for this FREE webinar!

https://attendee.gotowebinar.com/rt/6709628250976917251

Hurry as space is limited for this free webinar.

 

 

Oracle 12c GoldenGate Implementation Workshop online training


Oracle 12c GoldenGate Implementation Workshop online training commences on 23rd January.

 

This 20-hour workshop covers topics from the official Oracle University GoldenGate 12c Essentials, GoldenGate Advanced Configuration and GoldenGate Tuning and Troubleshooting classes.

 

Use the following links to register for the online training classes:

 7.00 to 9.00 PM IST Batch

https://attendee.gotowebinar.com/register/4325373570465792259

 

7.00 to 9.00 PM CST (USA) Batch

https://attendee.gotowebinar.com/register/7493591037135257347

 

The cost is only USD 499.00, which compares very favorably with the official OU course price of over USD 3,000!

 

Oracle GoldenGate 12c Implementation Workshop

 

Course Topics and Objectives

  • Oracle GoldenGate 12c (12.2) architecture, topologies and components
  • Installing and deinstalling GoldenGate using both the OUI and the command-line silent method
  • Configuring the Manager process
  • Preparing the Oracle database for GoldenGate replication
  • Creating Classic Extract and Replicat process groups
  • Creating Integrated Extract and Replicat process groups
  • Creating Coordinated Replicats
  • Configuring and managing DDL replication
  • Configuring security and encryption of trail files and credentials in GoldenGate
  • Column mapping
  • Data filtering and transformation
  • Using the Logdump utility to examine trail files
  • Using OBEY files, macros and tokens
  • Handling errors and exceptions in GoldenGate
  • Configuring automatic heartbeat tables
  • Monitoring lag
  • Configuring bi-directional replication
  • Configuring Conflict Detection and Resolution

 

All the topics listed above include hands-on lab exercises.

GoldenGate Performance Tuning Webinar


The Oracle GoldenGate Performance Tuning Webinar was well received by over 200 attendees across two separate sessions.

Feedback was very positive, and I am sharing the slide deck, which can be downloaded from the link below:

Download the presentation ….

 


Installing and Configuring Oracle GoldenGate Veridata 12c


This note demonstrates how to install and configure Oracle GoldenGate Veridata 12c, both the server and the agent.

At a high level the steps include:

  • Install Veridata Server
  • Create the GoldenGate Veridata Repository Schema using RCU
  • Configure WebLogic domain for Oracle GoldenGate Veridata
  • Start Admin and Managed Servers
  • Create the VERIDATA_ADMIN user
  • Launch and test Veridata Web User Interface
  • Install the Veridata Agent on the hosts where you want to run Veridata comparison jobs
  • Configure and start the Veridata agent

Download the note …..

Installing and Configuring Oracle GoldenGate Monitor 12c (12.1.3.0)


GoldenGate Monitor is a web-based monitoring console that provides a real-time graphical overview of all the Oracle GoldenGate instances in our enterprise.

We can view statistics and alerts as well as monitor the performance of all the related GoldenGate components in all environments in our enterprise from a single console.

GoldenGate Monitor can also send alert messages to e-mail and SNMP clients.

This note describes the steps involved in installing and configuring the Oracle GoldenGate 12c Monitor Server and Monitor Agent.

At a high level, these are the different steps:

  • Install JDK 1.7
  • Install Fusion Middleware Infrastructure 12.1.3.0, which will also install WebLogic Server 12.1.3
  • From the Fusion Middleware Infrastructure home run the Repository Creation Utility (RCU) to create an Oracle GoldenGate Monitor-specific repository in an Oracle database.
  • Install Oracle GoldenGate Monitor Server (and optionally Monitor Agent)
  • Create the WebLogic Domain for GoldenGate Monitor
  • Edit the monitor.properties file
  • Configure boot.properties file for WebLogic Admin and Managed Servers
  • Start the WebLogic Admin and Managed Servers
  • Create the GoldenGate Monitor Admin user via the WebLogic Console and grant the user the appropriate roles
  • Install the Oracle GoldenGate Monitor Agent on the target hosts running the GoldenGate environments we want to monitor
  • Configure the Monitor Agent and edit the Config.properties file

Download the note ….

Oracle Database In-Memory 12c Release 2 New Features


The Oracle Database In-Memory 12c Release 2 New Features webinar conducted last week was well received by a global audience and feedback was positive. For those who missed the session, you can download the slide deck from the link below. Feedback and questions are welcome!

12.2_InMemory_new_features

Oracle Database 12c Release 2 (12.2.0.1) upgrade using DBUA


Oracle 12c Release 2 (12.2.0.1) was officially released for on-premises deployment yesterday. I tested an upgrade of one of my test 12.1.0.2 databases using the Database Upgrade Assistant (DBUA) and the upgrade went smoothly.

The Parallel Upgrade command-line utility catctl.pl has a number of changes and enhancements compared to 12c Release 1; I will discuss those in a later post.

Here are the screenshots of the database upgrade process.

 

[Screenshots: upg1 – upg9]

Note – I converted my database to NOARCHIVELOG mode only because I did not have the recommended free space in the FRA. Don’t do this in production: ideally you would take a backup of the archived logs or a last level 1 incremental backup, or set a Guaranteed Restore Point so that you can flash the database back if required.
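For reference, creating (and later dropping) a guaranteed restore point around the upgrade is a one-liner; the restore point name below is purely illustrative and assumes the FRA has enough space to retain the flashback logs.

-- illustrative name only
SQL> CREATE RESTORE POINT before_122_upgrade GUARANTEE FLASHBACK DATABASE;

-- once the upgraded database has been validated, release the space:
SQL> DROP RESTORE POINT before_122_upgrade;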

I did notice that the redo generated by the upgrade process seems to be far more than for earlier version upgrades; even the DBUA recommendation was to double the Fast Recovery Area space allocation.

 

[Screenshots: upg10 – upg22]

Oracle 12c Release 2 (12.2.0.1.0) Grid Infrastructure Upgrade


I recently upgraded an Oracle 12c Release 1 (12.1.0.2) Grid Infrastructure environment, hosted in a RAC VirtualBox setup on my laptop, to the latest 12c Release 2 (12.2.0.1.0) release.

Here are some points to note about the upgrade process:

    • The Grid Infrastructure 12c Release 2 (12.2) software is now available as a single image file for direct download and installation, which greatly simplifies and speeds up installation of the Grid Infrastructure software.

    • We just have to extract the image file linuxx64_12201_grid_home.zip into an empty directory where we want the Grid home to be located.

    • Once the software has been extracted, we run the gridSetup.sh script, which launches the installer from which we can perform either a fresh install or an upgrade.

    • We need about 33 GB of free disk space in the ASM disk groups for the upgrade process.

    • The mount point hosting the Grid Infrastructure home needs at least 12 GB of free disk space.

    • It is now mandatory to store the Oracle Clusterware files, such as the Oracle Cluster Registry (OCR) and Voting Disks, on ASM; these files can no longer be placed on any other kind of shared storage.

    • The mandatory patch 21255373 must be applied to the existing 12.1.0.2 Grid Infrastructure home. In this case a number of prerequisite checks failed, relating to memory (each node now needs a minimum of 8 GB of RAM), swap size, NTP and resolv.conf; since this is a test VirtualBox environment we can ignore those and continue with the upgrade, but the mandatory patch 21255373 cannot be ignored.

    • In order to install the patch, we also have to download the OPatch patch 6880880 for Oracle 12.2.0.1.0 (OPatch 12.2.0.1.8); a rough sketch of these patching steps follows this list.

    • When we run opatchauto to apply patch 21255373, we get the error java.text.ParseException: Unparseable date. This happens because the time zone entry AWST (Australian Western Standard Time) is added to $ORACLE_HOME/inventory/ContentsXML/comps.xml, and opatch uses the Java 1.6 JRE in $ORACLE_HOME/jdk/jre, which cannot read the AWST entry. We can ignore the error and continue with the patch application, but afterwards we must change all occurrences of the string “AWST” in comps.xml to “WST”; otherwise, even though the patch has been applied, opatch lsinventory will not show it until the date string in comps.xml is corrected.

    • The upgrade failed at 46% in the Execute Root Scripts phase. I ran crsctl stop crs -f as root on each node, clicked the Retry button, and the upgrade then continued without any error.

    • At the end of the upgrade, the Cluster Verification Utility fails because it checks for an NTP configuration appropriate for an Oracle RAC environment. NTP is not configured in this VirtualBox environment, so we can ignore the error.
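As a rough sketch of the patching steps mentioned above (the host name, staging directory and Grid home path /u01/app/12.1.0.2/grid are assumptions for illustration; always cross-check the exact opatchauto invocation against the patch README):

# as the grid software owner: refresh OPatch in the existing 12.1.0.2 Grid home
[oracle@rac1 ~]$ unzip -o p6880880_*.zip -d /u01/app/12.1.0.2/grid

# as root: apply the mandatory patch 21255373 to the 12.1.0.2 Grid home
[root@rac1 ~]# /u01/app/12.1.0.2/grid/OPatch/opatchauto apply /stage/21255373 -oh /u01/app/12.1.0.2/grid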

 

 

Here are some screenshots captured while the 12c Release 2 Grid Infrastructure upgrade was in progress.

 

[Screenshots: gi1 – gi19]

Note: change all occurrences of “AWST” to “WST” in the comps.xml file.
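A quick way to make that change (generic sketch; take a backup of the file first and substitute your actual Grid Infrastructure home for $ORACLE_HOME):

$ cd $ORACLE_HOME/inventory/ContentsXML
$ cp comps.xml comps.xml.bak
$ sed -i 's/AWST/WST/g' comps.xml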

 

[Screenshot: gi22]

Now the opatch lsinventory command shows that patch 21255373 has been applied.

 

[Screenshots: gi23 – gi37]

Oracle Database 12c Release 2 New Feature – Create Data Guard Standby Database Using DBCA


One of the really nice new features in Oracle 12c Release 2 (12.2.0.1) is the ability to create an Oracle Data Guard Standby Database using DBCA (Database Configuration Assistant). This greatly simplifies the process of creating a standby database and automates a number of steps that previously had to be performed manually.

In this example we will see how a 12.2.0.1 Data Guard environment is created via DBCA and then Data Guard Broker (DGMGRL).

The source database is called salesdb and the standby database DB_UNIQUE_NAME will be salesdb_sb.

Primary database host name is host01 and the Standby database host name is host02.

The syntax is:

dbca -createDuplicateDB
    -gdbName global_database_name
    -primaryDBConnectionString easy_connect_string_to_primary
    -sid database_system_identifier
    [-createAsStandby
        [-dbUniqueName db_unique_name_for_standby]]

We will run the command from the standby host host02 as shown below.
 

[oracle@host02 ~]$ dbca -silent -createDuplicateDB -gdbName salesdb -primaryDBConnectionString host01:1521/salesdb -sid salesdb -createAsStandby -dbUniqueName salesdb_sb
Enter SYS user password:
Listener config step
33% complete
Auxiliary instance creation
66% complete
RMAN duplicate
100% complete
Look at the log file "/u02/app/oracle/cfgtoollogs/dbca/salesdb_sb/salesdb.log" for further details.

Connect to the Standby Database and verify the role of the database
 
[Screenshot: dg1]
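If you prefer a query to the screenshot, something along these lines (standard V$DATABASE columns) confirms the role; on the new standby it should report DB_UNIQUE_NAME salesdb_sb and DATABASE_ROLE PHYSICAL STANDBY.

SQL> select name, db_unique_name, database_role, open_mode from v$database;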

 

Note that the SPFILE and password file for the Standby Database have been created automatically.

[oracle@host02 dbs]$ ls -l sp*
-rw-r-----. 1 oracle dba 5632 Mar 22 09:40 spfilesalesdb.ora

[oracle@host02 dbs]$ ls -l ora*
-rw-r-----. 1 oracle dba 3584 Mar 17 14:38 orapwsalesdb

 

Add the required entries to the tnsnames.ora file

[Screenshot: dg2]
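The entries would look something like the sketch below; the listener port and service names are assumed to match the defaults used in this example (the standby service name follows its DB_UNIQUE_NAME) and should be adjusted to your environment.

SALESDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host01)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = salesdb))
  )

SALESDB_SB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host02)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = salesdb_sb))
  )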

Continue with the Data Guard Standby Database creation using the Data Guard Broker
 

SQL> alter system set dg_broker_start=true scope=both;

System altered.

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production
[oracle@host01 archivelog]$ dgmgrl
DGMGRL for Linux: Release 12.2.0.1.0 - Production on Fri Mar 17 14:47:27 2017

connect /
Connected to "salesdb"
Connected as SYSDG.
DGMGRL> create configuration 'salesdb_dg'
> as primary database is 'salesdb'
> connect identifier is 'salesdb';
Configuration "salesdb_dg" created with primary database "salesdb"

DGMGRL> add database 'salesdb_sb' as connect identifier is 'salesdb_sb';
Database "salesdb_sb" added
DGMGRL> enable configuration;
Enabled.

 

Create the Standby Redo Log Files on the primary database

 

SQL> select member from v$logfile;

MEMBER
--------------------------------------------------------------------------------
/u03/app/oradata/salesdb/redo03.log
/u03/app/oradata/salesdb/redo02.log
/u03/app/oradata/salesdb/redo01.log

SQL> select bytes/1048576 from v$log;

BYTES/1048576
-------------
     200
     200
     200


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

 
Create the Standby Redo Log Files on the standby database

 

DGMGRL> connect /
Connected to "salesdb"
Connected as SYSDG.

DGMGRL> edit database 'salesdb_sb' set state='APPLY-OFF';
Succeeded.


SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1174405120 bytes
Fixed Size          8619984 bytes
Variable Size           436209712 bytes
Database Buffers   721420288 bytes
Redo Buffers              8155136 bytes
Database mounted.

SQL>  alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo1.log' size 200m;

Database altered.


SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo2.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo3.log' size 200m;

Database altered.

SQL> alter database add standby logfile '/u03/app/oradata/salesdb/standy_redo4.log' size 200m;

Database altered.

SQL> alter database open;

Database altered.

SQL>

 
Verify the Data Guard Configuration
 

DGMGRL> edit database 'salesdb_sb' set state='APPLY-ON';
Succeeded.


DGMGRL> show configuration;

Configuration - salesdb_dg

 Protection Mode: MaxPerformance

 salesdb    - Primary database
   salesdb_sb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 8 seconds ago)

 
Set the property StaticConnectIdentifier to prevent errors during switchover operations
 

DGMGRL> edit database 'salesdb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host01.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';

DGMGRL> edit database 'salesdb_sb' set property StaticConnectIdentifier='(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host02.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=salesdb_sb_DGMGRL)(INSTANCE_NAME=salesdb)(SERVER=DEDICATED)))';

Edit listener.ora on primary database host and add the lines shown below. Reload the listener.
 

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_DGMGRL)
      (SID_NAME = salesdb)
        )
  )

 
Edit listener.ora on standby database host and add the lines shown below. Reload the listener.
 

SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = salesdb_sb_DGMGRL)
      (SID_NAME = salesdb)
        )
  )

Oracle Database 12c Release 2 New Feature – Application Containers


One of the new multitenancy related features in Oracle 12c Release 2 is Application Containers.

In 12c Release 1, a Container Database (CDB) could host a number of optional pluggable databases (PDBs). In 12.2.0.1 the multitenancy feature has been enhanced further: in addition to CDBs and PDBs we can now have another component called an Application Container, which in essence is a hybrid of a CDB and a PDB.

So in 12.2.0.1 a CDB can optionally contain user-created Application Containers, and each Application Container can in turn host one or more PDBs.

For example, an Application Container can hold a number of PDBs containing the individual sales data of different regions, while at the same time sharing what are called common objects.

Perhaps each region’s PDB holds data just for that region, but the table structure is the same regardless of region. In that case the table definition (or metadata) is stored once in the application container and is accessible to all the PDBs it hosts. If a change is required to an application table, that DDL change need only be made once in the central application container and it then becomes visible to all the hosted PDBs.

Alternatively, some tables may be common to all the PDBs – some kind of master data, perhaps. Rather than storing this common data in each individual PDB (as was the case in 12.1.0.2), we store it just once in a central location, the application container, and that data is then visible to all the hosted PDBs.

In other words, an application container functions as an application specific CDB within a CDB.

Think of a Software as a Service (SaaS) deployment model where we are hosting a number of customers and each customer has its own individual data which needs to be stored securely in a separate database but at the same time we need to share some metadata or data which is common to all the customers.

Let’s have a look at a simple example of 12c Release 2 Application Containers at work.

The basic steps are:

  • Create the Application Container
  • Create the Pluggable Databases
  • Install the Application
  • After installing the application, synchronize the pluggable databases with the application container root so that any DDL or DML changes made by the application become visible to all hosted pluggable databases
  • Optionally upgrade or deinstall the application

 

Create the Application Container
 

SQL> CREATE PLUGGABLE DATABASE appcon1 AS APPLICATION CONTAINER ADMIN USER appadm IDENTIFIED BY oracle
FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/'); 

Pluggable database created.

 
Create the Pluggable Databases which are to be hosted by the Application Container by connecting to the application container root
 

SQL> alter session set container=appcon1;

Session altered.

SQL> alter pluggable database open;

Pluggable database altered.

SQL> CREATE PLUGGABLE DATABASE pdbhr1 ADMIN USER pdbhr1_adm IDENTIFIED BY oracle
     FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr1/');

Pluggable database created.

SQL> CREATE PLUGGABLE DATABASE pdbhr2 ADMIN USER pdbhr2_adm IDENTIFIED BY oracle
     FILE_NAME_CONVERT=('/u03/app/oradata/cdb1/pdbseed/','/u03/app/oradata/cdb1/appcon1/pdbhr2/');

Pluggable database created.

SQL> alter pluggable database all open;

Pluggable database altered.

 

Install the application
 
In the first example we will see how some common data is shared among all the pluggable databases. Note the keyword SHARING=DATA.
 

SQL> alter pluggable database application region_app begin install '1.0';

Pluggable database altered.

SQL> create user app_owner identified by oracle;

User created.

SQL> grant connect,resource,unlimited tablespace to app_owner;

Grant succeeded.

SQL> create table app_owner.regions
  2  sharing=data
  3  (region_id number, region_name varchar2(20));

Table created.

SQL> insert into app_owner.regions
  2  values (1,'North');

1 row created.

SQL> insert into app_owner.regions
  2  values (2,'South');

1 row created.

SQL> commit;

Commit complete.

SQL> alter pluggable database application region_app end install '1.0';

Pluggable database altered.

 

View information about Application Containers via the DBA_APPLICATIONS view

 

SQL> select app_name,app_status from dba_applications;

APP_NAME
--------------------------------------------------------------------------------
APP_STATUS
------------
APP$4BDAAF8836A20F9CE053650AA8C0AF21
NORMAL

REGION_APP
NORMAL

Synchronize the pluggable databases with the application root
 
Note that until this is done, changes made by the application install are not visible to the hosted PDBs.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL> select * from app_owner.regions;
select * from app_owner.regions
                        *
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 South

SQL> alter session set container=pdbhr2;

Session altered.

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 South

 

Note that any direct DDL or DML is not permitted in this case
 

SQL> drop table app_owner.regions;
drop table app_owner.regions
                     *
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action


SQL> insert into app_owner.regions values (3,'East');
insert into app_owner.regions values (3,'East')
                      *
ERROR at line 1:
ORA-65097: DML into a data link table is outside an application action

Let us now upgrade the application we just created and create the same application table, but this time with the keyword SHARING=METADATA
 

SQL> alter pluggable database application region_app begin upgrade '1.0' to '1.1';

Pluggable database altered.

SQL> select app_name,app_status from dba_applications;

APP_NAME
--------------------------------------------------------------------------------
APP_STATUS
------------
APP$4BDAAF8836A20F9CE053650AA8C0AF21
NORMAL

REGION_APP
UPGRADING


SQL> drop table app_owner.regions;

Table dropped.

SQL> create table app_owner.regions
  2  sharing=metadata
  3  (region_id number,region_name varchar2(20));

Table created.

SQL> alter pluggable database application region_app end upgrade;

Pluggable database altered.

 
We can see that the table definition is the same in both PDBs, but each PDB can now insert its own individual data into the table.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> desc app_owner.regions
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 REGION_ID					    	NUMBER
 REGION_NAME					VARCHAR2(20)

SQL> insert into app_owner.regions
  2  values (1,'North');

1 row created.

SQL> insert into app_owner.regions
  2  values (2,'North-East');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 North-East

SQL> alter session set container=pdbhr2;

Session altered.

SQL>  alter pluggable database application region_app sync;

Pluggable database altered.

SQL> desc app_owner.regions
 Name					   Null?    Type
 ----------------------------------------- -------- ----------------------------
 REGION_ID					    NUMBER
 REGION_NAME					    VARCHAR2(20)

SQL> select * from app_owner.regions;

no rows selected

SQL> insert into app_owner.regions
  2  values (1,'South');

1 row created.

SQL> insert into app_owner.regions
  2  values (2,'South-East');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 South
	 2 South-East

 
While DML activity was permitted in this case, DDL activity is still not permitted.
 

SQL> drop table app_owner.regions;
drop table app_owner.regions
                     *
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action


SQL> alter table app_owner.regions
  2  add (region_location varchar2(10));
alter table app_owner.regions
*
ERROR at line 1:
ORA-65274: operation not allowed from outside an application action

 
We will now perform another upgrade to the application, this time using the keyword SHARING=EXTENDED DATA. In this case a portion of the data is common and shared among all the PDBs, while each individual PDB still has the flexibility to store additional data specific to that PDB in the table alongside the common data.
 


SQL> alter session set container=appcon1;

Session altered.

SQL> alter pluggable database application region_app begin upgrade '1.1' to '1.2';

Pluggable database altered.

SQL> drop table app_owner.regions;

Table dropped.

SQL> create table app_owner.regions
  2  sharing=extended data
  3  (region_id number,region_name varchar2(20));

Table created.

SQL> insert into app_owner.regions
  2  values (1,'North');

1 row created.

SQL> commit;

Commit complete.

SQL> alter pluggable database application region_app end upgrade;

Pluggable database altered.

 
Note that the PDBs share some common data, but each individual PDB can insert its own data.
 

SQL> alter session set container=pdbhr1;

Session altered.

SQL>  alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North




SQL> insert into app_owner.regions
  2  values
  3  (2,'North-West');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North
	 2 North-West

SQL> alter session set container=pdbhr2;

Session altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 South
	 2 South-East

SQL> alter pluggable database application region_app sync;

Pluggable database altered.

SQL> select * from app_owner.regions;

 REGION_ID REGION_NAME
---------- --------------------
	 1 North



Oracle Database 12.2 New Feature – Pluggable Database Performance Profiles


In the earlier 12.1.0.2 Oracle database version, we could limit the amount of CPU utilization as well as Parallel Server allocation at the PDB level via Resource Plans.

Now in 12c Release 2, we can not only regulate CPU and parallelism at the pluggable database level but also restrict the amount of memory that each PDB hosted by a Container Database (CDB) uses.

Further, we can also limit the number of I/O operations each PDB performs, so we now have a far-improved Resource Manager ensuring that no PDB hogs all the CPU or I/O (because of, say, a runaway query) and thereby impacts the other PDBs hosted in the same CDB.

We can now limit the amount of SGA or PGA that an individual PDB can use, as well as guarantee certain PDBs a minimum share of the available SGA and PGA memory.

For example we can now issue SQL statements like these while connected to the individual PDB.

 

SQL> ALTER SYSTEM SET SGA_TARGET = 500M SCOPE = BOTH;

SQL> ALTER SYSTEM SET SGA_MIN_SIZE = 300M SCOPE = BOTH;

SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 200M SCOPE = BOTH;

SQL> ALTER SYSTEM SET MAX_IOPS = 10000 SCOPE = BOTH;

Another 12c Release 2 New Feature related to Multitenancy is Performance Profiles.

With Performance Profiles we can manage resources for large numbers of PDBs by specifying Resource Manager directives for profiles instead of for each individual PDB.

These profiles are then allocated to the PDBs via the initialization parameter DB_PERFORMANCE_PROFILE.

Let us look at a worked example of Performance Profiles.

In this example we have three PDBs (PDB1, PDB2 and PDB3) hosted in the container database CDB1. The PDB1 pluggable database hosts some mission-critical applications, and we need to ensure that PDB1 gets a higher share of memory, I/O and CPU resources than PDB2 and PDB3.

We will enforce this resource allocation via two Performance Profiles, which we will call TIER1 and TIER2.

Here are the steps:

 

Create a Pending Area

 

SQL> exec DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA ();

PL/SQL procedure successfully completed.

 

 

Create a CDB Resource Plan

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
         plan    => 'profile_plan',
         comment => 'Performance Profile Plan allocating highest share of resources to PDB1');
     END;
     /

PL/SQL procedure successfully completed.

Create the CDB resource plan directives for the PDBs

The Tier 1 performance profile guarantees at least 60% (3 shares) of the available CPU and parallel server resources, with no upper limit on CPU utilization or parallel server execution. In addition, it guarantees a minimum allocation of at least 50% of the available memory.

 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
         plan       => 'profile_plan',
         profile    => 'Tier1',
         shares     => 3,
         memory_min => 50);
     END;
     /

PL/SQL procedure successfully completed.

 

The Tier 2 performance profile is more restrictive: it has fewer shares than Tier 1, limits CPU/parallel server usage to 40%, and caps memory usage at the PDB level at a maximum of 25% of the available memory.

 

SQL> BEGIN
       DBMS_RESOURCE_MANAGER.CREATE_CDB_PROFILE_DIRECTIVE(
         plan              => 'profile_plan',
         profile           => 'Tier2',
         shares            => 2,
         utilization_limit => 40,
         memory_limit      => 25);
     END;
     /

PL/SQL procedure successfully completed.

 

Validate and Submit the Pending Area

 

SQL> exec DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();

PL/SQL procedure successfully completed.

SQL> exec DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();

PL/SQL procedure successfully completed.


Allocate Performance Profiles to PDBs

 

TIER1 Performance Profile is allocated to PDB1 and TIER2 Performance Profile is allocated to PDB2 and PDB3.

 

SQL> alter session set container=pdb1;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER1' scope=spfile;

System altered.

SQL> alter session set container=pdb2;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.

SQL> alter session set container=pdb3;

Session altered.

SQL> alter system set DB_PERFORMANCE_PROFILE='TIER2' scope=spfile;

System altered.

 

Set the Resource Plan at the CDB level

 

SQL> conn / as sysdba

Connected.

SQL> alter system set resource_manager_plan='PROFILE_PLAN' scope=both;

System altered.

 

Set the Performance Profiles at the PDB level

 

SQL> alter pluggable database all close immediate;

Pluggable database altered.

SQL> alter pluggable database all open;

Pluggable database altered.


Monitor memory utilization at PDB level

 

The V$RSRCPDBMETRIC view enables us to track the amount of memory used by each PDB.

We can see that PDB1, which belongs to the TIER1 profile, has almost double the memory allocation of the other two PDBs in the TIER2 profile.
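A query along the following lines (run from the CDB root) can be used for the comparison; the memory columns used here (SGA_BYTES, BUFFER_CACHE_BYTES, SHARED_POOL_BYTES, PGA_BYTES) are the ones documented for V$RSRCPDBMETRIC in 12.2, so treat this as a sketch and verify the column list against your release.

SQL> select r.con_id,
            p.name as pdb_name,
            round(r.sga_bytes/1048576)          as sga_mb,
            round(r.buffer_cache_bytes/1048576) as buffer_cache_mb,
            round(r.shared_pool_bytes/1048576)  as shared_pool_mb,
            round(r.pga_bytes/1048576)          as pga_mb
       from v$rsrcpdbmetric r
       join v$pdbs p on p.con_id = r.con_id
      order by r.con_id;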

Oracle 12.2 has a lot of new exciting features. Learn all about these at a forthcoming online training session. Contact prosolutions@gavinsoorma.com to register interest!

Oracle Database 12.2 New Feature – PDB Lockdown Profiles


In an earlier post I mentioned that one of the new features in Oracle Database 12.2 is the ability to set SGA and PGA memory-related parameters at the individual PDB level. This lets us further limit or define the resources a particular PDB can use and enables more efficient resource management in a multitenant environment.

In Oracle 12c Release 2 we can also limit the operations that can be performed within a particular PDB and restrict the features that can be used or enabled, all at the individual PDB level. We can even limit a PDB’s network connectivity by enabling or disabling network-related packages such as UTL_SMTP, UTL_HTTP and UTL_TCP at the PDB level.

This is done via the new 12.2 feature called Lockdown Profiles.

We create lockdown profiles via the CREATE LOCKDOWN PROFILE statement while connected to the CDB root; once the lockdown profile has been created, we add the restrictions or limits we want to enforce via the ALTER LOCKDOWN PROFILE statement.

To assign a lockdown profile to a particular PDB, we set the PDB_LOCKDOWN initialization parameter to the name of the lockdown profile created earlier.

If we set the PDB_LOCKDOWN parameter at the CDB level, it applies to all the PDBs in the CDB. We can also set the PDB_LOCKDOWN parameter at the PDB level, and we can have different PDB_LOCKDOWN values for different PDBs, as we will see in the example below.

Let us have a look at an example of PDB Lockdown Profiles at work.

In our CDB, we have two pluggable databases, PDB1 and PDB2. We want to restrict certain operations depending on the PDB involved.

Our requirements are the following:

  • We want to ensure that in PDB1 the value for SGA_TARGET cannot be altered – so even a privileged user cannot allocate additional memory to the PDB. However if memory is available, then PGA allocation can be altered.
  • PDB1 can be shut down only from the root container, not from within the pluggable database itself
  • The Partitioning feature is not available in PDB2

Create the Lockdown Profiles

SQL> show con_name

CON_NAME
------------------------------
CDB$ROOT

SQL> create lockdown profile pdb1_profile;

Lockdown Profile created.

SQL> create lockdown profile pdb2_profile;

Lockdown Profile created.

Alter Lockdown Profile pdb1_profile

SQL> alter lockdown profile pdb1_profile
     disable statement =('ALTER SYSTEM')
     clause=('SET')
     OPTION = ('SGA_TARGET');

Lockdown Profile altered.



SQL> alter lockdown profile pdb1_profile
     disable statement =('ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE');

Lockdown Profile altered.

Alter Lockdown Profile pdb2_profile

SQL> alter lockdown profile pdb2_profile
     DISABLE OPTION = ('PARTITIONING');

Lockdown Profile altered.

Enable the Lockdown Profiles for both PDB1 and PDB2 pluggable databases

SQL> conn / as sysdba
Connected.

SQL> alter session set container=PDB1;

Session altered.

SQL> alter system set PDB_LOCKDOWN='PDB1_PROFILE';

System altered.

SQL> alter session set container=PDB2;

Session altered.

SQL> alter system set PDB_LOCKDOWN='PDB2_PROFILE';

System altered.


Connect to PDB1 and try to increase the values of the parameters SGA_TARGET and PGA_AGGREGATE_TARGET

Note that we cannot alter SGA_TARGET because it is prevented by the lockdown profile in place, but we can alter PGA_AGGREGATE_TARGET because the lockdown profile clause only applies to the ALTER SYSTEM SET SGA_TARGET command.

SQL> alter session set container=PDB1;

Session altered.

SQL> alter system set sga_target=800m;
alter system set sga_target=800m
*
ERROR at line 1:
ORA-01031: insufficient privileges

SQL> alter system set pga_aggregate_target=200m;

System altered.

Connect to PDB2 and try to create a partitioned table

SQL> CREATE TABLE testme
     (id NUMBER,
      name VARCHAR2 (60))
   PARTITION BY HASH (id)
   PARTITIONS 4    ;
CREATE TABLE testme
*
ERROR at line 1:
ORA-00439: feature not enabled: Partitioning

Connect to PDB1 and try to shut down the pluggable database

Note that while we cannot shut down PDB1, we are able to shut down PDB2.

SQL> alter session set container=pdb1;

Session altered.

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE
*
ERROR at line 1:
ORA-01031: insufficient privileges


SQL> alter session set container=pdb2;

Session altered.

SQL> ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;

Pluggable database altered.

 


Oracle 12c Release 2 Partitioning New Features


A number of enhancements to the Oracle database Partitioning option have been introduced in Oracle Database 12c Release 2.

These include:

  • Automatic List Partitioning
  • Multi-Column List Partitioning
  • Read-only Partitions
  • Filtered Partition maintenance operations
  • Online conversion of non-partitioned to partitioned table
  • Partitioned External Tables

 

Similar to the interval partitioning method, Automatic List Partitioning creates a new partition automatically whenever a row arrives with a list partitioning key value that does not map to any existing partition.
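A minimal sketch of Automatic List Partitioning (the table, column and partition names here are purely illustrative):

SQL> create table sales_by_region
     ( sale_id     number,
       region_name varchar2(20),
       amount      number )
     partition by list (region_name) automatic
     ( partition p_north values ('North') );

-- inserting a row with a previously unseen region value silently creates a new partition
SQL> insert into sales_by_region values (1, 'South', 100);

SQL> commit;

SQL> select partition_name, high_value
       from user_tab_partitions
      where table_name = 'SALES_BY_REGION';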


Speaker at OTN Yathra 2017


I am presenting the following papers at the OTN Yathra tour 2017 which will cover six cities in India – Chennai, Bangalore, Hyderabad, Pune, Mumbai and Delhi.

 

Oracle Database Multitenant – What’s New in Oracle 12c Release 2

Oracle Database 12c Release 2 presents several new features related to the Multitenant option introduced in 12c Release 1. The session covers the exciting new features related to Container and Pluggable databases, including Hot Cloning, Refreshable Pluggable Databases, Application Containers, PDB Flashback, PDB Lockdown Profiles, Performance Profiles and Proxy PDBs, to name a few.

Upgrade and Migrate to Oracle Database 12c Release 2 – best practices for minimizing downtime

Oracle Database 12.2.0.1 was released a few months ago and introduces many exciting and ground-breaking new features. However, in many cases organizations cannot afford the outage required for such upgrades and migrations to a new release.

This session outlines the best practices which can be deployed to minimize downtime required for upgrades and discusses the pros and cons of different upgrade/migration methods and techniques like Oracle GoldenGate, Cross-platform Transportable Tablespaces, Rolling Upgrades using Transient Logical Standby Databases and Data Guard among others.

Attendees will also learn how to upgrade an Oracle 12.1.0.2 Multitenant environment with Container and Pluggable databases to Oracle 12c Release 2.