Channel: Oracle DBA – Tips and Techniques

Oracle GoldenGate 18c New Features


Oracle GoldenGate 18c now provides support for some new features which were introduced in Oracle Database 12c – namely support for Identity Columns and In-Database Row Archival.

Identity columns enable us to specify that a column should be automatically populated from a system-created sequence, similar to the AUTO_INCREMENT column in MySQL or the IDENTITY column in SQL Server.
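As a sketch, the HR.JOB_POSITIONS table used in the demonstration below could have been created with DDL along these lines (the exact column definitions are an assumption, not taken from the original article):

```sql
CREATE TABLE hr.job_positions
( position_id   NUMBER GENERATED ALWAYS AS IDENTITY,  -- populated from a system-created sequence
  position_name VARCHAR2(20)
);
```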

The Oracle 12c Information Lifecycle Management (ILM) feature called In-Database Archiving gives the database the ability to distinguish between active data and ‘older’ inactive data, while still storing all of it in the same database.

When we enable row archival for a table, a hidden column called ORA_ARCHIVE_STATE is added to the table. This column is automatically assigned a value of 0 to denote current data; rows we decide are candidates for row archiving are assigned the value 1.

Once the older data has been distinguished from the current data, we can archive and compress it to reduce the size of the database, or move it to a cheaper storage tier to reduce storage costs.

Note that Oracle GoldenGate support for these features requires Oracle Database 18c or above. It also requires the use of Integrated Extract with either Integrated Replicat or Integrated Parallel Replicat.

 
Identity Columns

Note that the identity column POSITION_ID in the table is automatically populated.
 

SQL> insert into hr.job_positions
  2  (position_name)
  3  values
  4  ('President');

1 row created.

SQL> insert into hr.job_positions
  2  (position_name)
  3   values
  4  ('Vice-President');

1 row created.

SQL>  insert into hr.job_positions
  2  (position_name)
  3   values
  4  ('Manager');

1 row created.

SQL> commit;

Commit complete.

SQL> select * from hr.job_positions;

POSITION_ID POSITION_NAME
----------- --------------------
	  1 President
	  2 Vice-President
	  3 Manager

 

Verify the extract has captured the changes
 

GGSCI (rac01.localdomain) 3> stats ext1 latest 

Sending STATS request to EXTRACT EXT1 ...

Start of Statistics at 2019-01-16 12:01:19.

Output to ./dirdat/ogg1/lt:

Extracting from PDB1.HR.JOB_POSITIONS to PDB1.HR.JOB_POSITIONS:

*** Latest statistics since 2019-01-16 12:00:15 ***
	Total inserts                   	           3.00
	Total updates                   	           0.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	           3.00

End of Statistics.

 
Verify replication has been performed on the target table
 


SQL>  select * from hr.job_positions;

POSITION_ID POSITION_NAME
----------- --------------------
	  1 President
	  2 Vice-President
	  3 Manager

 
 
In-Database Row Archival
 

Enable row archival for the SYSTEM.MYOBJECTS table. This table is based on the data dictionary view ALL_OBJECTS.


SQL> alter table system.myobjects row archival;

Table altered.

SQL> select distinct ora_archive_state from system.myobjects;

ORA_ARCHIVE_STATE
--------------------------------------------------------------------------------
0

 

We now perform the row archival. Data older than 01-JUL-18 is considered ‘old’ and needs to be archived. We use the ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1) clause in an UPDATE statement to achieve this row archival.

If we query the table after the archival is performed, we see that it now shows only 310 rows, not 71710 – the archived rows are no longer visible to ordinary queries.
 

SQL>  select count(*) from system.myobjects;

  COUNT(*)
----------
     71710

SQL> select count(*) from system.myobjects where created < '01-JUL-18';

  COUNT(*)
----------
     71400

SQL> select count(*) from system.myobjects where created > '01-JUL-18';

  COUNT(*)
----------
       310

SQL> update system.myobjects
  2  set ORA_ARCHIVE_STATE=DBMS_ILM.ARCHIVESTATENAME(1)
  3  where created < '01-JUL-18';

71400 rows updated.

SQL> commit;

Commit complete.

SQL> select count(*) from system.myobjects;

  COUNT(*)
----------
       310
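The archived rows still exist in the table; they are simply filtered out of queries by default. As a sketch (standard In-Database Archiving syntax, not part of the original demonstration), the archived rows can be made visible again at the session level:

```sql
-- Make archived rows (ORA_ARCHIVE_STATE = 1) visible in this session
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;

SELECT COUNT(*) FROM system.myobjects;

-- Revert to the default behaviour: active rows only
ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ACTIVE;
```

With visibility set to ALL, the count would again include all rows, archived and active.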

 
Verify the extract has captured this UPDATE statement
 

GGSCI (host01.localdomain as c##oggadmin@ORCLCDB/PDB1) 19> stats ext1 latest 

Sending STATS request to EXTRACT EXT1 ...

Start of Statistics at 2019-01-19 10:37:54.

Output to ./dirdat/lt:

Extracting from PDB1.SYSTEM.MYOBJECTS to PDB1.SYSTEM.MYOBJECTS:

*** Latest statistics since 2019-01-19 10:26:27 ***
	Total inserts                   	       71710.00
	Total updates                   	       71400.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	      143110.00

End of Statistics.

 

Note that replication has also been performed on the target table, and with row archival also enabled on the target table we see just 310 rows present in the table.
 

GGSCI (host02.localdomain) 10> stats rep1 latest 

Sending STATS request to REPLICAT REP1 ...

Start of Statistics at 2019-01-19 10:43:44.


Integrated Replicat Statistics:

	Total transactions            		           2.00
	Redirected                    		           0.00
	Replicated procedures         		           0.00
	DDL operations                		           0.00
	Stored procedures             		           0.00
	Datatype functionality        		           0.00
	Event actions                 		           0.00
	Direct transactions ratio     		           0.00%

Replicating from PDB1.SYSTEM.MYOBJECTS to PDB2.SYSTEM.MYOBJECTS:

*** Latest statistics since 2019-01-19 10:43:07 ***
	Total inserts                   	       71710.00
	Total updates                   	       71400.00
	Total deletes                   	           0.00
	Total discards                  	           0.00
	Total operations                	       143110.00

End of Statistics.

SQL>  select count(*) from system.myobjects;

  COUNT(*)
----------
       310

Oracle Database 19c Sharding Hands-On Tutorial


Oracle Sharding is an architecture in which data is horizontally partitioned across a number of independent physical databases called shards. Think of it as one giant database partitioned into many small databases located on different servers – similar to the concept of one giant table being divided into a number of smaller partitions. Unlike Oracle Partitioning, where all the partitions of a table are located in the same database, in Oracle Sharding the partitions of the same table are located in different databases.

All the shards together make up a single logical database, which is referred to as a sharded database or SDB.

Horizontal partitioning involves splitting a database table across the shard databases so that each shard contains the same table with the same set of columns but a different subset of rows. A table split up or partitioned in this manner is also known as a sharded table.

As far as the application is concerned, a sharded database looks like a single database, and the number of shards and the distribution of data across those shards are completely transparent to the application.

Sharding provides advantages like global distribution of data, where shards are located in different geographical regions and each shard holds data relevant and distinct to the region it is located in. It also provides linear scalability of workloads, data and users, as well as fault isolation: the failure of a shard is transparent to the other shards, which may be located in different data centres or even different countries.

However, it should be kept in mind that applications that use sharding must have a well-defined data model and data distribution strategy, and must primarily access data using a sharding key. Examples of a sharding key could be the CUSTOMER_ID or ORDER_ID columns in a sharded table; sharding is mainly suited to OLTP applications.

In addition to the shard databases, we also have a Shard Catalog database which provides the centralized management of the shard database topology as well as performs tasks like automated shard deployment and co-ordinating multi or cross-shard queries.

The Shard Director is a Global Service Manager (GSM)-type network listener which provides high-performance routing of application connections based on the sharding key.

We typically use the GDSCTL command-line utility to manage the shard catalog as well as the entire sharded environment.

Data in the sharded database (SDB) is typically accessed via a global service, which is defined via GDSCTL.

Distribution of partitions across shards is achieved by creating partitions in tablespaces that reside on different shards. Each partition of a sharded table is stored in a separate tablespace on a separate shard based on the sharding or partition key.

A tablespace is a logical unit of data distribution in an SDB. Sharded table partitions are stored in different tablespaces.

A sharded table family is a set of tables that are sharded in the same manner and are typically tables linked by a parent-child relationship.

A chunk is a set of tablespaces that store the corresponding partitions of all tables in a sharded table family. Take, for example, a sharded table family consisting of CUSTOMERS, ORDERS and LINEITEMS. A single chunk (typically a single tablespace) will contain the relevant partitions of all three tables, located in the same tablespace. Storing the corresponding partitions of related tables in the same shard helps minimize the number of multi-shard joins and improves shard performance.

In Oracle Sharding, tablespaces are created and managed as a unit called a tablespace set. A tablespace set consists of multiple tablespaces distributed across shards and all tablespaces in a tablespace set would have the same properties.

Sharding introduces some new DDL statements like CREATE SHARDED TABLE, CREATE DUPLICATED TABLE and CREATE TABLESPACE SET. DDL statements with this syntax can only be executed against a sharded database, and we need to issue the command ALTER SESSION ENABLE SHARD DDL first.
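As a hedged sketch of the shard DDL flow (the table name, column definitions and tablespace set name are made up for illustration), run against the shard catalog:

```sql
-- Shard DDL must be explicitly enabled in the session
ALTER SESSION ENABLE SHARD DDL;

-- A tablespace set: one logical unit, materialized as tablespaces across all shards
CREATE TABLESPACE SET ts_set_1 USING TEMPLATE
( DATAFILE SIZE 100M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED );

-- A sharded table, partitioned by consistent hash on the sharding key
CREATE SHARDED TABLE customers
( cust_id   NUMBER NOT NULL,
  cust_name VARCHAR2(50),
  CONSTRAINT customers_pk PRIMARY KEY (cust_id)
)
PARTITION BY CONSISTENT HASH (cust_id)
PARTITIONS AUTO
TABLESPACE SET ts_set_1;
```

Each partition created this way lands in a separate tablespace of the set, on a separate shard, keyed by CUST_ID.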

Oracle 19c Sharding Hands-On Tutorial

ASM Flex Disk Groups, Quota Groups and ASM Split Mirror Cloning


In earlier releases we could create an ASM Disk Group which could potentially contain the data files of a number of databases. The issue was that we could not perform any storage management at the database level.

If the Disk Group redundancy was set to, say, HIGH (which is 3-way mirroring), that applied to every database which had files in that particular ASM Disk Group. Maybe test and development databases shared the same ASM Disk Group as a production database, and we did not wish to have that redundancy setting for the non-production databases.

Also if a number of databases shared the same ASM Disk Group, there was no way of preventing a certain database from using all the available space in a particular disk group.

Further, settings like the ASM rebalance power limit could only be set at the ASM Disk Group level, and maybe we would like a higher rebalance power limit for a more critical database as opposed to another database which does not require a very fast rebalance operation.

Starting with Oracle 12c Release 2, Oracle ASM provides database-oriented storage management with Oracle ASM flex groups, file groups and quota groups.

A new feature introduced in Oracle 18c enables a very fast method of cloning for pluggable databases called ASM Split Mirror Cloning which is based on the ASM Flex Disk Group feature.

Now the redundancy of files in a flex disk group is flexible and enables storage management at the database level. Each database has its own file group, and storage management can be done at the file group level, in addition to the disk group level (which was the only level possible earlier).
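As an illustration (the file group and database names here are hypothetical, and the attribute syntax follows the ASM file group property conventions – treat this as a sketch), a file group can be added to a flex disk group for a database client and a property set at the file group level:

```sql
-- Add a file group for a database client in the flex disk group
ALTER DISKGROUP flex_data ADD FILEGROUP my_pdb_fg DATABASE mypdb;

-- Set a per-file-group property, e.g. the redundancy used for its data files
ALTER DISKGROUP flex_data MODIFY FILEGROUP my_pdb_fg
  SET 'datafile.redundancy' = 'high';
```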

A flex disk group requires a minimum of three failure groups and the redundancy setting of a flex disk group is set to FLEX REDUNDANCY. The flex disk group can tolerate two failures which is the same as a HIGH redundancy disk group.

Starting in Oracle 18c we can also convert disk groups with NORMAL or HIGH redundancy settings (not EXTERNAL)  to flex disk groups.
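A minimal sketch of both approaches – creating a flex disk group from scratch and converting an existing one – with disk paths and disk group names assumed:

```sql
-- Create a flex disk group (requires at least three failure groups / disks)
CREATE DISKGROUP flex_data FLEX REDUNDANCY
  DISK '/dev/asmdisk1', '/dev/asmdisk2', '/dev/asmdisk3'
  ATTRIBUTE 'compatible.asm' = '18.0', 'compatible.rdbms' = '18.0';

-- Convert an existing NORMAL or HIGH redundancy disk group to flex (18c onwards)
ALTER DISKGROUP data CONVERT REDUNDANCY TO FLEX;
```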


In this case we use ASM Configuration Assistant (ASMCA) to create the Flex ASM Disk Group and then the File Groups and Quota Groups. Note that the Flex Disk Group needs at least 3 disks. Also, via ASMCA we can now view the attributes of the ASM Disk Group as well.
 

ASM SPLIT MIRROR CLONING (new Oracle 18c feature)

 
We have created a pluggable database PDB1 in the CDB named SALES. Note that each CDB and PDB is assigned its own individual filegroup.
 

SQL> select FILEGROUP_NUMBER, NAME, CLIENT_NAME, USED_QUOTA_MB, QUOTAGROUP_NUMBER from v$asm_filegroup;

FILEGROUP_NUMBER NAME              CLIENT_NAME     USED_QUOTA_MB QUOTAGROUP_NUMBER
---------------- ----------------- --------------- ------------- -----------------
               0 DEFAULT_FILEGROUP                             0                 1
               1 SALES_CDB$ROOT    SALES_CDB$ROOT           5328                 1
               2 SALES_PDB$SEED    SALES_PDB$SEED           1496                 1
               3 SALES_PDB1        SALES_PDB1               1744                 1


SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO

 
Prepare the Mirror Copy
 
Connect to the source pluggable database PDB1 and issue the PREPARE MIRROR COPY command.

SQL> alter session set container=pdb1;

Session altered.

SQL> ALTER PLUGGABLE DATABASE PREPARE MIRROR COPY pdb1_mirror;
ALTER PLUGGABLE DATABASE PREPARE MIRROR COPY pdb1_mirror
*
ERROR at line 1:
ORA-15283: ASM operation requires compatible.rdbms of 18.0.0.0.0 or higher


After raising the disk group attribute compatible.rdbms to 18.0.0.0.0, the command succeeds:

SQL> ALTER PLUGGABLE DATABASE PREPARE MIRROR COPY pdb1_mirror;

Pluggable database altered.

 
We can monitor the progress of the mirror copy being prepared. Connect to the ASM instance and query the v$asm_dbclone_info view.
 

SQL> select mirrorcopy_name, dbclone_status from v$asm_dbclone_info;

MIRRORCOPY_NAME
--------------------------------------------------------------------------------------------------------------------------------
DBCLONE_STATUS
--------------------------------------------------------------------------------------------------------------------------------
PDB1_MIRROR
PREPARING


SQL> /

MIRRORCOPY_NAME
--------------------------------------------------------------------------------------------------------------------------------
DBCLONE_STATUS
--------------------------------------------------------------------------------------------------------------------------------
PDB1_MIRROR
PREPARED

 
Split the Mirrored Copy and Create the Database Clone
 
Note that the prepare and copy step must complete before starting this step. Connect to the CDB root container and issue the CREATE PLUGGABLE DATABASE command with the USING MIRROR COPY clause.
 

 SQL> conn / as sysdba
Connected.

SQL> CREATE PLUGGABLE DATABASE pdb2 FROM pdb1 USING MIRROR COPY pdb1_mirror;

Pluggable database created.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO
	 4 PDB2 			  MOUNTED

SQL> alter pluggable database pdb2 open ;

Pluggable database altered.

 
After the pluggable database has been created we can see now that the DBCLONE_STATUS column shows the value SPLIT COMPLETED.
 

SQL> select mirrorcopy_name, dbclone_status from v$asm_dbclone_info;

MIRRORCOPY_NAME
--------------------------------------------------------------------------------
DBCLONE_STATUS
--------------------------------------------------------------------------------
PDB1_MIRROR
SPLIT COMPLETED

 

SQL> alter session set container=pdb2;

Session altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+FLEX_DATA/SALES/8AB11EBA6B714BE3E0536438A8C0CE87/DATAFILE/system.279.1010302271
+FLEX_DATA/SALES/8AB11EBA6B714BE3E0536438A8C0CE87/DATAFILE/sysaux.280.1010302271
+FLEX_DATA/SALES/8AB11EBA6B714BE3E0536438A8C0CE87/DATAFILE/undotbs1.281.10103022
71

+FLEX_DATA/SALES/8AB11EBA6B714BE3E0536438A8C0CE87/DATAFILE/undo_2.284.1010302271
+FLEX_DATA/SALES/8AB11EBA6B714BE3E0536438A8C0CE87/DATAFILE/users.282.1010302271

 
Using Quota Groups
 
We create a new Quota Group with a 2 GB limit. We then modify the file group to assign this quota group to it.

We then try to extend one of the data files of the PDB1 database and get an error, because the PDB would be using more space in the ASM disk group than the quota group allows.
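The quota group creation step itself is not shown in the listing below; as a sketch it would look something like this (the quota group name matches the one used in the listing):

```sql
-- Create a quota group in the flex disk group with a 2 GB limit
ALTER DISKGROUP flex_data ADD QUOTAGROUP q_grp_sales_pdb1
  SET 'quota' = '2g';
```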
 

SQL> ALTER DISKGROUP flex_data MODIFY FILEGROUP sales_pdb1
   SET 'quota_group' = 'Q_GRP_SALES_PDB1';

Diskgroup altered.


SQL> select FILEGROUP_NUMBER, NAME, CLIENT_NAME, USED_QUOTA_MB, QUOTAGROUP_NUMBER from v$asm_filegroup;

FILEGROUP_NUMBER NAME              CLIENT_NAME     USED_QUOTA_MB QUOTAGROUP_NUMBER
---------------- ----------------- --------------- ------------- -----------------
               0 DEFAULT_FILEGROUP                             0                 1
               1 SALES_CDB$ROOT    SALES_CDB$ROOT           7120                 3
               2 SALES_PDB$SEED    SALES_PDB$SEED           1496                 1
               3 SALES_PDB1        SALES_PDB1               1936                 4


SQL> select quotagroup_number,name,used_quota_mb, quota_limit_mb 
    from  v$asm_quotagroup;

QUOTAGROUP_NUMBER NAME             USED_QUOTA_MB QUOTA_LIMIT_MB
----------------- ---------------- ------------- --------------
                1 GENERIC                   1496              0
                2 SALES_Q_GRP                  0          12288
                3 HR_Q_GRP                  4120           5120
                4 Q_GRP_SALES_PDB1          1936           2048

SQL> alter database datafile 10 resize 5G;
alter database datafile 10 resize 5G
*
ERROR at line 1:
ORA-01237: cannot extend datafile 10
ORA-01110: data file 10:
'+FLEX_DATA/SALES/8AAAFEAC96597F17E0536438A8C0AD2C/DATAFILE/system.274.101027602
3'
ORA-17505: ksfdrsz:1 Failed to resize file to size 655360 blocks
ORA-15437: Not enough quota available in quota group Q_GRP_SALES_PDB1.

Oracle 19c Scalable Sequences


Index block contention is very common in databases with high insert activity, and it’s especially common on tables that have monotonically increasing key values, typically generated via a sequence.

Oracle B-tree indexes are “right-handed”: the right-most leaf block of the B-tree, at the lowest tree level, contains the highest key values.

Index leaf block contention happens when rows are inserted with monotonically increasing key values, such as those generated by a sequence: the most recent entries always land in the right-most leaf block of the B-tree.

This means all new rows are stored in the right-most leaf block of the index, and as more and more sessions insert rows into the table, that leaf block fills up.

Oracle then splits that right-most leaf block into two leaf blocks, with one block containing all the rows except one and a new block containing just a single row.

This type of index growth is termed “right-handed growth”. As more and more concurrent sessions insert into the right-most leaf block of the index, that block becomes a hot block, and contention on that leaf block leads to performance issues.

In an Oracle RAC database this problem is magnified and becomes a bigger bottleneck. If the sequence cache (which is instance specific) is small (it defaults to 20), then the right-most leaf block becomes a hot block not only in one instance but in all the instances in the cluster, and the hot block has to be transferred back and forth over the interconnect.

Oracle Database 19c introduces a new type of sequence called a Scalable Sequence.

Now in Oracle 19c, for data-ingestion workloads with a high level of concurrency, the new scalable sequence generates unordered primary or unique key values, significantly reducing the sequence and index block contention caused by right-handed indexes. This provides better throughput, data-load scalability and performance compared with the pre-19c workaround of configuring a very large sequence cache via the CACHE clause of CREATE SEQUENCE or ALTER SEQUENCE.
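For reference, the pre-19c workaround mentioned above is simply a large per-instance cache (the sequence name here is hypothetical):

```sql
-- Pre-19c mitigation: cache many values per instance to reduce contention
ALTER SEQUENCE hr.orders_seq CACHE 10000;
```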

The scalable sequence value is prefixed with a sequence offset number, which by default contains 6 digits.

The first 3 digits are derived from the instance number with 100 added, and the next 3 digits are derived from the SID of the session.

So let’s say a session has SID 555 and instance number 1 – then the offset number is 101555; for another session with SID 666 on instance 2, the offset number would be 102666.

We can create the scalable sequence with either the EXTEND or the NOEXTEND option.

When the EXTEND option is specified for the SCALE clause, the scalable sequence values are of length [X digits + Y digits], where X is the number of digits in the sequence offset and Y is the number of digits specified in the MAXVALUE clause.

Let us see how that works when we create a scalable sequence with the EXTEND option.

Note the different (unrelated and unordered) values which are generated by the sequence in each instance. The first value on instance 1, 101007001, breaks down as 101 (100 + instance number 1), 007 (the session SID 7, zero-padded) and 001 (the sequence value, padded to the 3-digit width of MAXVALUE 100).

 

Instance 1

SQL> create sequence system.scale_ext_seq 
  2  start with 1 increment by 1
  3  maxvalue 100
  4  scale extend;

Sequence created.

SQL> select system.scale_ext_seq.nextval from dual;

   NEXTVAL
----------
 101007001


SQL> select sid from v$mystat where rownum = 1;

       SID
----------
	 7

Instance 2

SQL> select system.scale_ext_seq.nextval from dual; 

   NEXTVAL
----------
 102036021

SQL> select sid from v$mystat where rownum = 1;

       SID
----------
	36

When the NOEXTEND option is specified for the SCALE clause the number of digits in the scalable sequence cannot exceed the number of digits specified in the MAXVALUE clause.

Note what happens when the number of digits in the sequence value would exceed 7 (the number of digits in the MAXVALUE of 1000000).

 

SQL>create sequence system.scale_noext_seq 
  start with 1 increment by 1
  maxvalue 1000000
  scale noextend;  

Sequence created.

INSTANCE 1

SQL> select system.scale_noext_seq.nextval from dual;

   NEXTVAL
----------
   1010071

SQL> /

   NEXTVAL
----------
   1010072

SQL> /

   NEXTVAL
----------
   1010073

...
...

   NEXTVAL
----------
   1010078

SQL> /

   NEXTVAL
----------
   1010079

SQL> /
select system.scale_noext_seq.nextval from dual
*
ERROR at line 1:
ORA-64603: NEXTVAL cannot be instantiated for SCALE_NOEXT_SEQ. Widen the
sequence by 1 digits or alter sequence with SCALE EXTEND.


SQL> alter sequence system.scale_noext_seq maxvalue 10000000;

Sequence altered.

SQL> select system.scale_noext_seq.nextval from dual;

   NEXTVAL
----------
  10100741



INSTANCE 2


SQL> select system.scale_noext_seq.nextval from dual;

   NEXTVAL
----------
  10203661

Oracle 19c AutoUpgrade Utility (Part 1)


The AutoUpgrade utility is a new feature in Oracle 19c, designed to automate the upgrade process – not just the database upgrade itself but also the pre-upgrade and post-upgrade steps.

Consider a case where, as a DBA, you have not one but hundreds of databases which need to be upgraded; until now the only option was to upgrade each of these databases either manually or via the DBUA utility.

With the new 19c AutoUpgrade utility, all we need to do is create a configuration file containing the details of the databases to be upgraded and then run the Java-based autoupgrade.jar.

The autoupgrade.jar file ships with the Oracle 19c database software in the $ORACLE_HOME/rdbms/admin directory. The recommendation, however, is to use the autoupgrade.jar file downloaded from MOS note 2485457.1.

The utility requires Java 8 and we can use the Java 8 available in the Oracle 19c database software home.

[oracle@host02 admin]$ pwd
/u01/app/oracle/product/19.3.0/dbhome_1/rdbms/admin

[oracle@host02 admin]$ export ORACLE_HOME=/u01/app/oracle/product/19.3.0/dbhome_1

[oracle@host02 admin]$ $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -version
build.version 20190207
build.date 2019/02/07 12:35:56
build.label RDBMS_PT.AUTOUPGRADE_LINUX.X64_190205.1800

 

Note the difference in the version of the autoupgrade.jar file which is downloaded from MOS.

[oracle@host02 sf_software]$ cp autoupgrade.jar /home/oracle

[oracle@host02 sf_software]$ cd /home/oracle

[oracle@host02 ~]$ $ORACLE_HOME/jdk/bin/java -jar autoupgrade.jar -version
build.version 20190513
build.date 2019/05/13 16:59:48

 

The autoupgrade utility can be run in a number of different modes.

Analyze

Performs a read-only pre-upgrade analysis of databases before upgrade and identifies any issues which might prevent a successful upgrade. We can run AutoUpgrade in Analyze mode on the source Oracle Database home during normal database operation.

The Analyze mode produces a report which identifies upgrade issues and possible errors that would occur if we do not correct them, either by running an automatic fixup script, or by manual corrective action.

Fixup

In Fixup mode, AutoUpgrade performs the same checks that it also performs in Analyze mode but after completing these pre-upgrade checks it then runs automated fixups of the source database in preparation for the database upgrade.

Deploy

In Deploy mode, the AutoUpgrade utility performs the actual upgrade of the database as well as a number of post-upgrade steps, such as recompiling invalid objects and upgrading the time zone (DST) data, among other things.

This is an example of a configuration file I will use to upgrade two 12.2 databases to Oracle 19c.

[oracle@host01 admin]$ cat /tmp/config.txt 
#
# Global logging directory pertains to all jobs
#
global.autoupg_log_dir=/u02/app/oracle/autoupgrade        # Top level logging directory (Required)

#
# Database 1
#
upg1.dbname=db1                                
upg1.source_home=/u02/app/oracle/product/12.2.0/dbhome_1 
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1 
upg1.sid=db1                                  
upg1.start_time=09/06/2019 17:30:00                     
upg1.log_dir=/u02/app/oracle/autoupgrade/db1
upg1.upgrade_node=localhost                   
upg1.run_utlrp=yes  
upg1.timezone_upg=yes 
upg1.target_version=12.2

#
# Database 2
#


upg2.dbname=db2                            # Database Name (Required)
upg2.source_home=/u02/app/oracle/product/12.2.0/dbhome_1 # Source Home (Required)
upg2.target_home=/u01/app/oracle/product/19.3.0/dbhome_1 # Target home (Required)
upg2.sid=db2                                 # Oracle Sid (Required)
upg2.start_time=09/06/2019 19:30:00                            # Start time of the operation (Required)
upg2.log_dir=/u02/app/oracle/autoupgrade/db2            # Local logging directory (Required)
upg2.upgrade_node=localhost                    # Upgrade node that operation will run on (Required)
upg2.run_utlrp=yes  # yes(default) to run utlrp as part of upgrade, no to skip it (Optional)
upg2.timezone_upg=yes # yes(default) to upgrade timezone if needed, no to skip it (Optional)
upg2.target_version=12.2                      # Oracle Home Target version number (Required)

Execute autoupgrade in ANALYZE mode

The Autoupgrade Console enables us to monitor as well as manage and control the jobs started by the autoupgrade utility.

For example, the ‘lsj’ command at the console prompt lists the running upgrade jobs along with their progress and status.

 

[oracle@host02 autoupgrade]$ $ORACLE_HOME/jdk/bin/java -jar /home/oracle/autoupgrade.jar -config /tmp/config.txt -mode analyze 
Autoupgrade tool launched with default options
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
2 databases will be analyzed
Type 'help' to list console commands
upg> lsj
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
| 104|    DB1|PRECHECKS|PREPARING| RUNNING|19/06/09 16:57|     N/A|16:57:45|Remaining 49/71|
| 105|    DB2|    SETUP|PREPARING|FINISHED|19/06/09 16:57|     N/A|16:57:34|      Scheduled|
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
Total jobs 2

upg> lsj
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
| 104|    DB1|PRECHECKS|PREPARING| RUNNING|19/06/09 16:57|     N/A|16:57:51|Remaining 15/71|
| 105|    DB2|    SETUP|PREPARING|FINISHED|19/06/09 16:57|     N/A|16:57:34|      Scheduled|
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
Total jobs 2

upg> tasks
+---+--------------+-------------+
| ID|          NAME|         Job#|
+---+--------------+-------------+
|  1|          main|      WAITING|
| 30|      jobs_mon|      WAITING|
| 31|       console|     RUNNABLE|
| 32|  queue_reader|      WAITING|
| 33|         cmd-0|      WAITING|
| 44| job_manager-0|      WAITING|
| 47|    event_loop|TIMED_WAITING|
| 48|    bqueue-104|      WAITING|
| 50|    checks-104|      WAITING|
| 51|rep_checks-104|TIMED_WAITING|
|105|    db1-puic-0|      WAITING|
|106|    db1-puic-1|      WAITING|
|170|      quickSQL|     RUNNABLE|
+---+--------------+-------------+

upg> status 
---------------- Config -------------------
User configuration file    [/tmp/config.txt]
General logs location      [/u02/app/oracle/autoupgrade        # Top level logging directory (Required)/cfgtoollogs/upgrade/auto]
Mode                       [ANALYZE]
DB upg fatal errors        ORA-00600,ORA-07445
DB Post upgrade abort time [60] minutes
DB upg abort time          [1440] minutes
DB restore abort time      [120] minutes
DB drop GRP abort time     [3] minutes
------------------------ Jobs ------------------------
Total databases in configuration file [2]
Total Non-CDB being processed         [2]
Total CDB being processed             [0]
Jobs finished successfully            [0]
Jobs finished/aborted                 [0]
jobs in progress                      [2]
------------ Resources ----------------
Threads in use                        [21]
JVM used memory                       [41] MB
CPU in use                            [13%]
Processes in use                      [14]

upg> Job 104 completed
Job 105 completed
------------------- Final Summary --------------------
Number of databases            [ 2 ]

Jobs finished successfully     [2]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 104 FOR DB1
Job 105 FOR DB2

[oracle@host02 autoupgrade]$ 

 
Note the log files which have been created for each database – as we have only run AutoUpgrade in Analyze mode, the only directory created is the prechecks directory.

For each database which has been analyzed, we can review the HTML file which lists the pre-check warnings and recommendations.
 

[oracle@host01 prechecks]$ pwd
/u02/app/oracle/autoupgrade/db1/db1/104/prechecks
[oracle@host01 prechecks]$ ls -l
total 180
-rwx------. 1 oracle oinstall   1967 May 21 00:29 db1_checklist.cfg
-rwx------. 1 oracle oinstall   1616 May 21 00:29 db1_checklist.json
-rwx------. 1 oracle oinstall   1892 May 21 00:29 db1_checklist.xml
-rwx------. 1 oracle oinstall  23354 May 21 00:29 db1_preupgrade.html
-rwx------. 1 oracle oinstall   7619 May 21 00:29 db1_preupgrade.log
-rwx------. 1 oracle oinstall 138146 May 21 00:29 prechecks_db1.log



[oracle@host01 prechecks]$ pwd
/u02/app/oracle/autoupgrade/db2/db2/105/prechecks
[oracle@host01 prechecks]$ ls -lrt 
total 180
-rwx------. 1 oracle oinstall 138147 May 21 00:29 prechecks_db2.log
-rwx------. 1 oracle oinstall   1901 May 21 00:29 db2_checklist.xml
-rwx------. 1 oracle oinstall   1976 May 21 00:29 db2_checklist.cfg
-rwx------. 1 oracle oinstall   7543 May 21 00:29 db2_preupgrade.log
-rwx------. 1 oracle oinstall   1625 May 21 00:29 db2_checklist.json
-rwx------. 1 oracle oinstall  23230 May 21 00:29 db2_preupgrade.html

 
 


 
 
View the db1_preupgrade.html file ….

Oracle 19c Autoupgrade Utility (Part 2)


In AutoUpgrade 19c Part 1, we executed AutoUpgrade in ANALYZE mode, which performed a read-only check of the database and returned a report highlighting any warnings or potential errors which might occur during the database upgrade, along with some recommendations.

When AutoUpgrade is executed in FIXUP mode, it first performs the same checks it performs in Analyze mode, and after completing these checks it then runs all automated fixup tasks required to prepare the earlier release source database before the upgrade is commenced.

Note that AutoUpgrade does not create a restore point while running in Fixup mode (this is only done in Deploy/Upgrade mode), so it is recommended to take a backup or create a manual Guaranteed Restore Point (GRP) before running AutoUpgrade in Fixup mode.
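A manual GRP can be created with a single SQL statement before kicking off the fixups; a minimal sketch (the restore point name here is purely illustrative, and a Fast Recovery Area must already be configured):

CREATE RESTORE POINT before_autoupgrade_fixups GUARANTEE FLASHBACK DATABASE;

-- Confirm the restore point exists and is guaranteed
SELECT name, guarantee_flashback_database, time FROM v$restore_point;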
 

The prechecks folder contains a SID_checklist.cfg file which lists the prechecks to be performed and indicates, for each precheck, whether a corresponding fixup exists.
 

[dbname]             [DB1]
==========================================
[container]          [DB1]
==========================================
[checkname]          DICTIONARY_STATS
[stage]              PRECHECKS
[fixup_available]    YES
[runfix]             YES
[severity]           RECOMMEND
----------------------------------------------------

[checkname]          POST_DICTIONARY
[stage]              POSTCHECKS
[fixup_available]    YES
[runfix]             YES
[severity]           RECOMMEND
----------------------------------------------------

[checkname]          POST_FIXED_OBJECTS
[stage]              POSTCHECKS
[fixup_available]    YES
[runfix]             YES
[severity]           RECOMMEND
----------------------------------------------------

[checkname]          PRE_FIXED_OBJECTS
[stage]              PRECHECKS
[fixup_available]    YES
[runfix]             YES
[severity]           RECOMMEND
----------------------------------------------------

[checkname]          OLD_TIME_ZONES_EXIST
[stage]              POSTCHECKS
[fixup_available]    YES
[runfix]             YES
[severity]           WARNING
----------------------------------------------------

[checkname]          PARAMETER_MIN_VAL
[stage]              PRECHECKS
[fixup_available]    YES
[runfix]             YES
[severity]           WARNING
----------------------------------------------------

[checkname]          MANDATORY_UPGRADE_CHANGES
[stage]              PRECHECKS
[fixup_available]    YES
[runfix]             YES
[severity]           INFO
----------------------------------------------------

[checkname]          RMAN_RECOVERY_VERSION
[stage]              PRECHECKS
[fixup_available]    NO
[runfix]             N/A
[severity]           INFO
----------------------------------------------------

[checkname]          TABLESPACES_INFO
[stage]              PRECHECKS
[fixup_available]    NO
[runfix]             N/A
[severity]           INFO
----------------------------------------------------

 
Execute Autoupgrade in FIXUP mode
 

[oracle@host02 bin]$ $ORACLE_HOME/jdk/bin/java -jar /home/oracle/autoupgrade.jar -config /tmp/config.txt -mode fixups
Autoupgrade tool launched with default options
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
2 databases will be processed
Type 'help' to list console commands
upg> lsj
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
| 106|    DB1|PRECHECKS|PREPARING| RUNNING|19/06/10 18:37|     N/A|18:37:07|Loading DB info|
| 107|    DB2|    SETUP|PREPARING|FINISHED|19/06/10 18:37|     N/A|18:37:06|      Scheduled|
+----+-------+---------+---------+--------+--------------+--------+--------+---------------+
Total jobs 2

upg> status 
---------------- Config -------------------
User configuration file    [/tmp/config.txt]
General logs location      [/u02/app/oracle/autoupgrade/cfgtoollogs/upgrade/auto]
Mode                       [FIXUPS]
DB upg fatal errors        ORA-00600,ORA-07445
DB Post upgrade abort time [60] minutes
DB upg abort time          [1440] minutes
DB restore abort time      [120] minutes
DB drop GRP abort time     [3] minutes
------------------------ Jobs ------------------------
Total databases in configuration file [2]
Total Non-CDB being processed         [2]
Total CDB being processed             [0]
Jobs finished successfully            [0]
Jobs finished/aborted                 [0]
jobs in progress                      [2]
------------ Resources ----------------
Threads in use                        [19]
JVM used memory                       [26] MB
CPU in use                            [13%]
Processes in use                      [19]

upg> tasks
+---+--------------+-------------+
| ID|          NAME|         Job#|
+---+--------------+-------------+
|  1|          main|      WAITING|
| 30|      jobs_mon|      WAITING|
| 31|       console|     RUNNABLE|
| 32|  queue_reader|      WAITING|
| 33|         cmd-0|      WAITING|
| 44| job_manager-0|      WAITING|
| 47|    event_loop|TIMED_WAITING|
| 48|    bqueue-106|      WAITING|
| 49|    checks-106|      WAITING|
| 50|rep_checks-106|TIMED_WAITING|
|104|    db1-puic-0|      WAITING|
|105|    db1-puic-1|      WAITING|
|169|      quickSQL|     RUNNABLE|
|171|      quickSQL|     RUNNABLE|
+---+--------------+-------------+
upg> lsj
+----+-------+---------+---------+--------+--------------+--------+--------+---------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|  MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+--------+---------+
| 106|    DB1|PREFIXUPS|EXECUTING| RUNNING|19/06/10 18:37|     N/A|18:37:34|         |
| 107|    DB2|    SETUP|PREPARING|FINISHED|19/06/10 18:37|     N/A|18:37:06|Scheduled|
+----+-------+---------+---------+--------+--------------+--------+--------+---------+
Total jobs 2

upg> lsj
+----+-------+---------+---------+--------+--------------+--------+--------+-------------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|      MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+--------+-------------+
| 106|    DB1|PREFIXUPS|EXECUTING| RUNNING|19/06/10 18:37|     N/A|18:37:42|Remaining 5/5|
| 107|    DB2|    SETUP|PREPARING|FINISHED|19/06/10 18:37|     N/A|18:37:06|    Scheduled|
+----+-------+---------+---------+--------+--------------+--------+--------+-------------+
Total jobs 2

upg> lsj
+----+-------+---------+---------+--------+--------------+--------+--------+-------------+
|Job#|DB_NAME|    STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|      MESSAGE|
+----+-------+---------+---------+--------+--------------+--------+--------+-------------+
| 106|    DB1|PREFIXUPS|EXECUTING| RUNNING|19/06/10 18:37|     N/A|18:37:42|Remaining 5/5|
| 107|    DB2|    SETUP|PREPARING|FINISHED|19/06/10 18:37|     N/A|18:37:06|    Scheduled|
+----+-------+---------+---------+--------+--------------+--------+--------+-------------+
Total jobs 2

upg> Job 106 completed
Job 107 completed
------------------- Final Summary --------------------
Number of databases            [ 2 ]

Jobs finished successfully     [2]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 106 FOR DB1
Job 107 FOR DB2

[oracle@host02 bin]$ 

 

If we query the LAST_ANALYZED column, we can see that the data dictionary statistics are current and were gathered by the AutoUpgrade fixup jobs which were executed.
 

SQL> select max(last_analyzed) from dba_tables where owner='SYS'
  2  and table_name='ACCESS$';

MAX(LAST_
---------
10-JUN-19


SQL> prompt 'Statistics for Fixed Objects'
select NVL(TO_CHAR(last_analyzed, 'YYYY-Mon-DD'), 'NO STATS') last_analyzed, COUNT(*) fixed_objects
FROM dba_tab_statistics
WHERE object_type = 'FIXED TABLE'
GROUP BY TO_CHAR(last_analyzed, 'YYYY-Mon-DD')
ORDER BY 1 DESC;

SQL> 'Statistics for Fixed Objects'
 
LAST_ANALYZED FIXED_OBJECTS
------------- -------------
NO STATS                152
2019-Jun-10            1137
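For reference, the dictionary and fixed-object statistics that the fixup jobs collected can also be gathered manually with DBMS_STATS; for example:

-- Gather data dictionary statistics (SYS-owned objects such as ACCESS$)
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;

-- Gather statistics on fixed (X$) objects
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;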

Oracle 19c New Feature AutoUpgrade Utility (Part 3)


The AutoUpgrade feature automates each step of a typical upgrade process and enables us to perform an upgrade with as little human intervention as possible.

AutoUpgrade is based on a configuration file which contains details of the databases we want to upgrade and we also have an AutoUpgrade job manager which executes the various upgrade related jobs depending on which phase or mode AutoUpgrade is executed with.

The autoupgrade.jar file used by the AutoUpgrade utility ships with the Oracle 19c software and is located by default in $ORACLE_HOME/rdbms/admin; note that the file is not present in lower versions. For earlier releases we have to download it from MOS note 2485457.1, and we can use the utility to automate upgrades from 12c R2 to 18c as well; it is not limited to just Oracle 19c.

In AutoUpgrade 19c Part 1, we executed AutoUpgrade in ANALYZE mode, which performed a read-only check of the database and returned a report highlighting any warnings or potential errors which might occur during the database upgrade, along with some recommendations.

In AutoUpgrade 19c Part 2, we executed AutoUpgrade in FIXUP mode, where it not only performs the checks that it performs in Analyze mode but, after completing these checks, also runs all automated fixup tasks required to prepare the earlier release source database before the upgrade is commenced.

The AutoUpgrade DEPLOY processing mode performs the actual upgrade of the database in addition to all the steps performed in the Analyze and Fixup phases discussed in Part 1 and Part 2. Basically, in Deploy mode AutoUpgrade runs all upgrade tasks on the database, from pre-upgrade source database analysis through to post-upgrade checks.

While we had earlier executed the Analyze and Fixup phases for two databases, DB1 and DB2, we are only running the upgrade for one of the databases (DB1) – so we need to amend the /tmp/config.txt file accordingly.
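As a sketch, the amended /tmp/config.txt would retain only the DB1 entries; all values below are illustrative and must match your own environment (the homes shown follow the paths used elsewhere in this post):

# Top level logging directory (Required)
global.autoupg_log_dir=/u02/app/oracle/autoupgrade

upg1.dbname=db1
upg1.sid=db1
upg1.source_home=/u02/app/oracle/product/12.2.0/dbhome_1
upg1.target_home=/u01/app/oracle/product/19.3.0/dbhome_1
upg1.start_time=NOW
upg1.log_dir=/u02/app/oracle/autoupgrade/db1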
 
Execute AutoUpgrade in DEPLOY mode
 

[oracle@host02 prechecks]$ /u01/app/oracle/product/19.3.0/dbhome_1/jdk/bin/java -jar /home/oracle/autoupgrade.jar -config /tmp/config.txt -mode deploy
Autoupgrade tool launched with default options
+--------------------------------+
| Starting AutoUpgrade execution |
+--------------------------------+
1 databases will be processed
Type 'help' to list console commands

 

Executing Prechecks
 

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
| 100|    DB1|PRECHECKS|PREPARING|RUNNING|19/06/12 12:10|     N/A|12:10:04|Loading DB info|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
Total jobs 1

 
Running Fixup jobs
 

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
| 100|    DB1|PREFIXUPS|EXECUTING|RUNNING|19/06/12 12:10|     N/A|12:10:41|Loading DB info|
+----+-------+---------+---------+-------+--------------+--------+--------+---------------+
Total jobs 1

 
Drain phase – copy Wallet (if exists) and shut down database
 

upg> lsj
+----+-------+-----+---------+-------+--------------+--------+--------+-------+
|Job#|DB_NAME|STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|MESSAGE|
+----+-------+-----+---------+-------+--------------+--------+--------+-------+
| 100|    DB1|DRAIN|EXECUTING|RUNNING|19/06/12 12:10|     N/A|12:10:57|       |
+----+-------+-----+---------+-------+--------------+--------+--------+-------+
Total jobs 1

 
Start the database upgrade
 

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+-------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+-------+
| 100|    DB1|DBUPGRADE|EXECUTING|RUNNING|19/06/12 12:10|     N/A|12:11:17|Running|
+----+-------+---------+---------+-------+--------------+--------+--------+-------+
Total jobs 1

 
Check the status of the job
 

upg> status
---------------- Config -------------------
User configuration file    [/tmp/config.txt]
General logs location      [/u02/app/oracle/autoupgrade/new/cfgtoollogs/upgrade/auto]
Mode                       [DEPLOY]
DB upg fatal errors        ORA-00600,ORA-07445
DB Post upgrade abort time [60] minutes
DB upg abort time          [1440] minutes
DB restore abort time      [120] minutes
DB drop GRP abort time     [3] minutes
------------------------ Jobs ------------------------
Total databases in configuration file [1]
Total Non-CDB being processed         [1]
Total CDB being processed             [0]
Jobs finished successfully            [0]
Jobs finished/aborted                 [0]
jobs in progress                      [1]
------------ Resources ----------------
Threads in use                        [20]
JVM used memory                       [32] MB
CPU in use                            [13%]
Processes in use                      [18]

upg> tasks
+---+------------------+-------------+
| ID|              NAME|         Job#|
+---+------------------+-------------+
|  1|              main|      WAITING|
| 20|          jobs_mon|      WAITING|
| 21|           console|     RUNNABLE|
| 22|      queue_reader|      WAITING|
| 23|             cmd-0|      WAITING|
| 29|     job_manager-0|      WAITING|
| 31|        event_loop|TIMED_WAITING|
| 32|        bqueue-100|      WAITING|
|344|         exec_loop|      WAITING|
|350|       monitor_db1|TIMED_WAITING|
|351|        catctl_db1|      WAITING|
|352| abort_monitor_db1|TIMED_WAITING|
|353|        async_read|     RUNNABLE|
+---+------------------+-------------+

 
Check the status of the database upgrade – note that the %Upgraded value is changing
 

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|    MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
| 100|    DB1|DBUPGRADE|EXECUTING|RUNNING|19/06/12 12:10|     N/A|12:11:44|8%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+-----------+
Total jobs 1

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 100|    DB1|DBUPGRADE|EXECUTING|RUNNING|19/06/12 12:10|     N/A|12:19:38|19%Upgraded |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1

 
Upgrade is now complete – recompiling invalid objects
 

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 100|    DB1|DBUPGRADE|EXECUTING|RUNNING|19/06/12 12:10|     N/A|12:51:24|90%Compiled |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1

upg> lsj
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
|Job#|DB_NAME|    STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|     MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
| 100|    DB1|DBUPGRADE|EXECUTING|RUNNING|19/06/12 12:10|     N/A|12:51:24|90%Compiled |
+----+-------+---------+---------+-------+--------------+--------+--------+------------+
Total jobs 1

 

Running post-upgrade fixup jobs, such as the Timezone DST upgrade, and restarting the database
 

upg> lsj
+----+-------+----------+---------+-------+--------------+--------+--------+-------------+
|Job#|DB_NAME|     STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|      MESSAGE|
+----+-------+----------+---------+-------+--------------+--------+--------+-------------+
| 100|    DB1|POSTFIXUPS|EXECUTING|RUNNING|19/06/12 12:10|     N/A|13:00:40|Remaining 1/3|
+----+-------+----------+---------+-------+--------------+--------+--------+-------------+
Total jobs 1

upg> lsj
+----+-------+----------+---------+-------+--------------+--------+--------+---------------+
|Job#|DB_NAME|     STAGE|OPERATION| STATUS|    START_TIME|END_TIME| UPDATED|        MESSAGE|
+----+-------+----------+---------+-------+--------------+--------+--------+---------------+
| 100|    DB1|POSTFIXUPS|EXECUTING|RUNNING|19/06/12 12:10|     N/A|13:03:01|Loading DB info|
+----+-------+----------+---------+-------+--------------+--------+--------+---------------+
Total jobs 1

upg> lsj
+----+-------+-----------+---------+--------+--------------+--------+--------+-------------+
|Job#|DB_NAME|      STAGE|OPERATION|  STATUS|    START_TIME|END_TIME| UPDATED|      MESSAGE|
+----+-------+-----------+---------+--------+--------------+--------+--------+-------------+
| 100|    DB1|POSTUPGRADE|EXECUTING|FINISHED|19/06/12 12:10|     N/A|13:03:58|RESTARTING_DB|
+----+-------+-----------+---------+--------+--------------+--------+--------+-------------+
Total jobs 1

upg> Job 100 completed
------------------- Final Summary --------------------
Number of databases            [ 1 ]

Jobs finished successfully     [1]
Jobs failed                    [0]
Jobs pending                   [0]
------------- JOBS FINISHED SUCCESSFULLY -------------
Job 100 FOR DB1

[oracle@host02 prechecks]$ 

 
Note that the oratab file entry for database db1 has been updated to point to the new 19c Oracle Home
 

[oracle@host02 2019_06_12]$ cat /etc/oratab
#
# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.

# A colon, ':', is used as the field terminator.  A new line terminates
# the entry.  Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME::
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
db1:/u01/app/oracle/product/19.3.0/dbhome_1:N
db2:/u02/app/oracle/product/12.2.0/dbhome_1:N
[oracle@host02 2019_06_12]$ 

 
Note the various upgrade directories and log files for each stage
 

[oracle@host02 100]$ ls -l
total 744
-rwx------ 1 oracle oinstall 716018 Jun 12 13:04 autoupgrade_20190612.log
-rwx------ 1 oracle oinstall   8389 Jun 12 13:03 autoupgrade_20190612_user.log
-rwx------ 1 oracle oinstall      0 Jun 12 12:10 autoupgrade_err.log
drwx------ 2 oracle oinstall   4096 Jun 12 12:55 dbupgrade
drwx------ 2 oracle oinstall   4096 Jun 12 12:11 drain
drwx------ 2 oracle oinstall   4096 Jun 12 12:57 postchecks
drwx------ 2 oracle oinstall   4096 Jun 12 13:03 postfixups
drwx------ 2 oracle oinstall   4096 Jun 12 13:03 postupgrade
drwx------ 2 oracle oinstall   4096 Jun 12 12:10 prechecks
drwx------ 2 oracle oinstall   4096 Jun 12 12:10 prefixups
drwx------ 2 oracle oinstall   4096 Jun 12 12:10 preupgrade
[oracle@host02 100]$ 

 

View the Upgrade Summary report
 

[oracle@host02 dbupgrade]$ pwd
/u02/app/oracle/autoupgrade/db1/db1/100/dbupgrade
[oracle@host02 dbupgrade]$ ls -l
total 71464
-rwx------ 1 oracle oinstall    12129 Jun 12 12:55 autoupgrade20190612121003db1.log
-rwx------ 1 oracle oinstall 49061634 Jun 12 12:55 catupgrd20190612121003db10.log
-rwx------ 1 oracle oinstall  8551109 Jun 12 12:45 catupgrd20190612121003db11.log
-rwx------ 1 oracle oinstall  6557311 Jun 12 12:45 catupgrd20190612121003db12.log
-rwx------ 1 oracle oinstall  8833106 Jun 12 12:45 catupgrd20190612121003db13.log
-rwx------ 1 oracle oinstall      532 Jun 12 12:11 catupgrd20190612121003db1_catcon_18422.lst
-rwx------ 1 oracle oinstall        0 Jun 12 12:40 catupgrd20190612121003db1_datapatch_upgrade.err
-rwx------ 1 oracle oinstall     1303 Jun 12 12:43 catupgrd20190612121003db1_datapatch_upgrade.log
-rwx------ 1 oracle oinstall    38515 Jun 12 12:46 catupgrd20190612121003db1_stderr.log
-rwx------ 1 oracle oinstall    31341 Jun 12 12:55 db1_autocompile20190612121003db10.log
-rwx------ 1 oracle oinstall      546 Jun 12 12:47 db1_autocompile20190612121003db1_catcon_25152.lst
-rwx------ 1 oracle oinstall     2070 Jun 12 12:55 db1_autocompile20190612121003db1_stderr.log
-rwx------ 1 oracle oinstall     4187 Jun 12 12:43 during_upgrade_pfile_catctl.ora
-rwx------ 1 oracle oinstall    32574 Jun 12 12:11 phase.log
-rwx------ 1 oracle oinstall     1728 Jun 12 12:55 upg_summary.log
-rwx------ 1 oracle oinstall       46 Jun 12 12:55 upg_summary_report.log
-rwx------ 1 oracle oinstall      423 Jun 12 12:55 upg_summary_report.pl
[oracle@host02 dbupgrade]$ cat upg_summary.log

Oracle Database Release 19 Post-Upgrade Status Tool    06-12-2019 12:45:3
Database Name: DB1

Component                               Current         Full     Elapsed Time
Name                                    Status          Version  HH:MM:SS

Oracle Server                          UPGRADED      19.3.0.0.0  00:14:19
JServer JAVA Virtual Machine           UPGRADED      19.3.0.0.0  00:01:17
Oracle XDK                             UPGRADED      19.3.0.0.0  00:01:03
Oracle Database Java Packages          UPGRADED      19.3.0.0.0  00:00:13
OLAP Analytic Workspace                UPGRADED      19.3.0.0.0  00:00:16
Oracle Label Security                  UPGRADED      19.3.0.0.0  00:00:06
Oracle Database Vault                  UPGRADED      19.3.0.0.0  00:00:19
Oracle Text                            UPGRADED      19.3.0.0.0  00:00:35
Oracle Workspace Manager               UPGRADED      19.3.0.0.0  00:00:40
Oracle Real Application Clusters       UPGRADED      19.3.0.0.0  00:00:00
Oracle XML Database                    UPGRADED      19.3.0.0.0  00:01:40
Oracle Multimedia                      UPGRADED      19.3.0.0.0  00:00:45
Spatial                                UPGRADED      19.3.0.0.0  00:06:02
Oracle OLAP API                        UPGRADED      19.3.0.0.0  00:00:13
Datapatch                                                        00:03:50
Final Actions                                                    00:03:57
Post Upgrade                                                     00:00:12

Total Upgrade Time: 00:32:15

Database time zone version is 26. It is older than current release time
zone version 32. Time zone upgrade is needed using the DBMS_DST package.

Grand Total Upgrade Time:    [0d:0h:44m:25s]
[oracle@host02 dbupgrade]$ 

 

View the post upgrade fixup log file
 

cat postfixups_db1.log 
...
...

temp/sqlsessend.sql][DB1] - old_time_zones_exist$UpgTarget.call 
2019-06-12 13:02:57.333 INFO Executing SQL [@/u02/app/oracle/autoupgrade/new/db1/db1/temp/sqlsessend.sql
] in [DB1, container:DB1] - ExecuteSql.sendSqlCmdToSqlPlus 
2019-06-12 13:02:57.383 INFO End Reading process Output Stream - ReadInputStream.run 
2019-06-12 13:02:57.383 INFO Begin Closing File /u02/app/oracle/autoupgrade/new/db1/db1/temp/DB1/TimeZone/DB1.log - ReadInputStream.run 
2019-06-12 13:02:57.383 INFO End Closing File /u02/app/oracle/autoupgrade/new/db1/db1/temp/DB1/TimeZone/DB1.log - ReadInputStream.run 
2019-06-12 13:02:57.383 INFO Finished - ReadInputStream.run 
2019-06-12 13:02:57.384 INFO Complete [/u02/app/oracle/autoupgrade/new/db1/db1/temp/sqlsessend.sql][DB1] - old_time_zones_exist$UpgTarget.call 
2019-06-12 13:02:57.384 INFO Looking for error in log file after /u02/app/oracle/autoupgrade/new/db1/db1/temp/sqlsessend.sql execution on [DB1] - old_time_zones_exist$UpgTarget.call 
2019-06-12 13:02:57.384 INFO Closing sqlplus with exitValue 0 [DB1] - old_time_zones_exist$UpgTarget.call 
2019-06-12 13:02:57.384 INFO The Timezone upgrade has finished for [DB1] - old_time_zones_exist.NonCDBTimeZoneUpg 
2019-06-12 13:02:57.384 INFO Finished - old_time_zones_exist.NonCDBTimeZoneUpg 
2019-06-12 13:02:57.384 INFO Finished FIXUP [OLD_TIME_ZONES_EXIST][DB1][SUCCESSFUL] - DBUpgradeInspector$FixUpTrigger.executeFixUp 
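
To confirm that the OLD_TIME_ZONES_EXIST fixup completed, we can check the database time zone file version; for instance:

-- Should now report the 19c time zone file version (32 in this release)
SELECT version FROM v$timezone_file;

SELECT property_value
FROM   database_properties
WHERE  property_name = 'DST_PRIMARY_TT_VERSION';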

 

View the post upgrade log file
 

[oracle@host02 postupgrade]$ cat postupgrade.log 
2019-06-12 13:03:58.001 INFO Deserializing /u02/app/oracle/autoupgrade/new/cfgtoollogs/upgrade/auto/config_files/dbstate_DB1 file from {1} - DBState.deserialize 
2019-06-12 13:03:58.031 INFO 
DataBase Name:db1
Sid Name     :db1
Source Home  :/u02/app/oracle/product/12.2.0/dbhome_1
Target Home  :/u01/app/oracle/product/19.3.0/dbhome_1
 - PostActions. 
2019-06-12 13:03:58.031 INFO Executing PostUpgrade - AutoUpgPostActions.runPostActions 
2019-06-12 13:03:58.039 INFO Starting - PostActions.runPostActions 
2019-06-12 13:03:58.042 INFO Starting - PostActions.upgPostActionsDriver 
2019-06-12 13:03:58.043 INFO Starting - Oratab.updateOraTab 
2019-06-12 13:03:58.043 INFO Begin Updating oratab /etc/oratab - Oratab.updateOraTab 
2019-06-12 13:03:58.045 INFO Updating oratab file /etc/oratab completed with success - Oratab.updateOraTab 
2019-06-12 13:03:58.053 INFO Starting - NetworkFiles.copyNetworkFiles 
2019-06-12 13:03:58.053 INFO Begin Copying network files - NetworkFiles.copyNetworkFiles 
2019-06-12 13:03:58.084 INFO File /u02/app/oracle/product/12.2.0/dbhome_1/network/admin/listener.ora does not exist - NetworkFiles.copyFile 
2019-06-12 13:03:58.084 INFO Copying/merging file listener.ora ended - NetworkFiles.copyFile 
2019-06-12 13:03:58.085 INFO IFILE /u02/app/oracle/product/12.2.0/dbhome_1/network/admin/tnsnames.ora not found, skipping... - NetworkFiles.processIFile 
2019-06-12 13:03:58.085 INFO File /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/tnsnames.ora.tmp does not exist - NetworkFiles.copyFile 
2019-06-12 13:03:58.085 INFO Copying/merging file /u01/app/oracle/product/19.3.0/dbhome_1/network/admin/tnsnames.ora.tmp ended - NetworkFiles.copyFile 
2019-06-12 13:03:58.085 INFO File /u02/app/oracle/product/12.2.0/dbhome_1/network/admin/sqlnet.ora does not exist - NetworkFiles.copyFile 
2019-06-12 13:03:58.085 INFO Copying/merging file sqlnet.ora ended - NetworkFiles.copyFile 
2019-06-12 13:03:58.085 INFO End Copying network files - NetworkFiles.copyNetworkFiles 
2019-06-12 13:03:58.085 INFO Finished - NetworkFiles.copyNetworkFiles 
2019-06-12 13:03:58.096 INFO Starting - PasswordFile.copyPasswordFile 
2019-06-12 13:03:58.096 INFO Begin Copying Password File - PasswordFile.copyPasswordFile 
2019-06-12 13:03:58.100 INFO Copying password file from /u02/app/oracle/product/12.2.0/dbhome_1/dbs/orapwdb1 to /u01/app/oracle/product/19.3.0/dbhome_1/dbs/orapwdb1 - PasswordFile.copyPasswordFile 
2019-06-12 13:03:58.107 INFO Copying password file completed with success - PasswordFile.copyPasswordFile 
2019-06-12 13:03:58.107 INFO End Copying Password File - PasswordFile.copyPasswordFile 
2019-06-12 13:03:58.107 INFO Finished - PasswordFile.copyPasswordFile 
2019-06-12 13:03:58.110 INFO Resetting DB state - DBState.resetStateOnTarget 
2019-06-12 13:03:58.110 INFO Resetting the CONCURRENT DBMS_STATS preference for DB1 - ConcurrentStat.resetConcurrentValue 
2019-06-12 13:03:58.110 INFO The CONCURRENT DBMS_STATS preference for DB1 was already set to OFF. - ConcurrentStat.resetConcurrentValue 
2019-06-12 13:03:58.112 INFO Return status is SUCCESS - PostActions.writeStatusLog 
2019-06-12 13:03:58.121 INFO Update of oratab [DB1]
	[/etc/oratab] [SUCCESS] [None]

Network Files [DB1]
	[/u01/app/oracle/product/19.3.0/dbhome_1/network/admin/tnsnames.ora] [SUCCESS] [None]
	[/u01/app/oracle/product/19.3.0/dbhome_1/network/admin/listener.ora] [SUCCESS] [None]
	[/u01/app/oracle/product/19.3.0/dbhome_1/network/admin/sqlnet.ora] [SUCCESS] [None]

Copy of password file [DB1]
	[/u01/app/oracle/product/19.3.0/dbhome_1/dbs/orapwdb1] [SUCCESS] [None]

Database State
	Resetting the database's state: [SUCCESS] [None]

 - PostActions.upgPostActionsDriver 
2019-06-12 13:03:58.122 INFO Finished - PostActions.upgPostActionsDriver 
2019-06-12 13:03:58.122 INFO Finished - PostActions.runPostActions 
2019-06-12 13:03:58.122 INFO No postupgrade user action defined - AutoUpgPostActions.runPostActions 
[oracle@host02 postupgrade]$ 

 

Note the Guaranteed Restore Point which was created automatically by AutoUpgrade
 


SQL> select GUARANTEE_FLASHBACK_DATABASE, NAME, TIME, PRESERVED from v$restore_point;

GUA
---
NAME
--------------------------------------------------------------------------------
TIME									    PRE
--------------------------------------------------------------------------- ---
YES
AUTOUPGRADE_221145114461854_DB1
12-JUN-19 12.10.29.000000000 PM 					    YES
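
Once we are satisfied that the upgraded database is working as expected, this GRP should be dropped so that flashback logs do not keep accumulating in the Fast Recovery Area; for example:

-- Drop the restore point created by AutoUpgrade (name as reported by v$restore_point)
DROP RESTORE POINT AUTOUPGRADE_221145114461854_DB1;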


 

Oracle 19c SQL Quarantine

$
0
0

With Oracle Resource Manager, we have long had a way to limit and regulate the use of resources such as CPU and I/O, as well as the ability to prevent the execution of any long-running query which exceeded a defined threshold.

So we could ‘cancel’ or terminate a SQL query which was running longer than a defined threshold of say 10 minutes.

All that was good, but nothing prevented that same query from being executed again and again, each time running for 10 minutes before being terminated and wasting 10 minutes of resources on every execution.

New in Oracle 19c is the concept of SQL Quarantine: if a particular SQL statement exceeds a specified resource limit (set via Oracle Resource Manager), Resource Manager terminates the execution of that statement and “quarantines” its execution plan.

This broadly speaking means that the execution plan is now placed on a “blacklist” of plans that the database will not execute.

This SQL Quarantine feature in turn helps the performance as it prevents the future execution of a costly SQL statement which has now been quarantined.
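
Quarantined statements can later be reviewed via the DBA_SQL_QUARANTINE dictionary view; a quick sketch:

-- Each row is a quarantine configuration created when Resource Manager
-- terminated a statement and quarantined its plan
SELECT name, sql_text, plan_hash_value, last_executed, enabled
FROM   dba_sql_quarantine;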

Note, however, that at the time of writing this feature is only available on Oracle Engineered Systems (both on-premises Exadata and ExaCS), so to test it out I had to set this underscore parameter and bounce the database:

alter system set "_exadata_feature_on"=true scope=spfile;

Let us have a quick look at how this feature works.

We begin by creating a consumer group and resource plan, then adding a plan directive which will limit the run time of queries executed by the DEMO schema to 20 seconds. Note that the plan directive was initially created with a threshold based on CPU time rather than wall-clock elapsed time, and was then altered to specify the threshold based on elapsed time instead.

So this part of creating and configuring a database resource plan is pretty standard and no changes required here.
 

begin
   dbms_resource_manager.create_pending_area();
  dbms_resource_manager.create_consumer_group(
    CONSUMER_GROUP=>'GROUP_WITH_LIMITED_EXEC_TIME',
    COMMENT=>'This is the consumer group that has limited execution time per statement'
    );
  dbms_resource_manager.set_consumer_group_mapping(
    attribute => 'ORACLE_USER',
    value => 'DEMO',
    consumer_group =>'GROUP_WITH_LIMITED_EXEC_TIME'
  );
    dbms_resource_manager.create_plan(
    PLAN=> 'LIMIT_EXEC_TIME',
    COMMENT=>'Kill statement after exceeding total execution time'
  );
   dbms_resource_manager.create_plan_directive(
    PLAN=> 'LIMIT_EXEC_TIME',
    GROUP_OR_SUBPLAN=>'GROUP_WITH_LIMITED_EXEC_TIME',
    COMMENT=>'Kill statement after exceeding total execution time',
    SWITCH_GROUP=>'CANCEL_SQL',
    SWITCH_TIME=>30,
    SWITCH_ESTIMATE=>false
  );
dbms_resource_manager.create_plan_directive(
    PLAN=> 'LIMIT_EXEC_TIME',
    GROUP_OR_SUBPLAN=>'OTHER_GROUPS',
    COMMENT=>'leave others alone',
    CPU_P1=>100
  );
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
end;
/

begin
  dbms_resource_manager.create_pending_area();
  dbms_resource_manager_privs.grant_switch_consumer_group('DEMO','GROUP_WITH_LIMITED_EXEC_TIME',false);
  dbms_resource_manager.set_initial_consumer_group('DEMO','GROUP_WITH_LIMITED_EXEC_TIME');
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
end;
/


BEGIN
   dbms_resource_manager.update_plan_directive(plan=>'LIMIT_EXEC_TIME',
   group_or_subplan=>'GROUP_WITH_LIMITED_EXEC_TIME',new_switch_elapsed_time=>20, new_switch_for_call=>TRUE,new_switch_group=>'CANCEL_SQL' );
   dbms_resource_manager.validate_pending_area();
   dbms_resource_manager.submit_pending_area;
END;
/

 

Let us test the resource plan.

We connect as the DEMO user and issue a query which will exceed the elapsed time threshold of 20 seconds defined in the resource plan.

We will see an error message and the running query will be terminated.
 

ERROR:
ORA-56735: elapsed time limit exceeded - call aborted

300 rows selected.

Elapsed: 00:00:19.64
SQL> 

 

We use the DBMS_SQLQ package to create a quarantine configuration for an execution plan of a SQL statement which needs to be quarantined. We can create a quarantine configuration by specifying either the SQL text or the SQL_ID of the statement to be quarantined, using CREATE_QUARANTINE_BY_SQL_TEXT or CREATE_QUARANTINE_BY_SQL_ID.
 

DECLARE
  quarantine_config VARCHAR2(30);
BEGIN
  quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_ID(SQL_ID => '491fa2p6qt9h6');
END;
/

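For reference, the SQL-text variant follows the same pattern. This is a sketch only; the SQL text shown is the demo query used later in this example, and TO_CLOB is applied since the parameter is documented as a CLOB:

```sql
-- Sketch: create a quarantine configuration from the SQL text instead of the SQL_ID
DECLARE
  quarantine_config VARCHAR2(30);
BEGIN
  quarantine_config := DBMS_SQLQ.CREATE_QUARANTINE_BY_SQL_TEXT(
    SQL_TEXT => TO_CLOB(q'[select * from demo.myobjects where owner='SYS']'));
END;
/
```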
 
After the quarantine configuration has been created for an execution plan for a SQL statement, we then specify quarantine thresholds using the DBMS_SQLQ.ALTER_QUARANTINE procedure.

When any threshold defined by Resource Manager is equal to or less than a quarantine threshold specified in the SQL quarantine configuration, the SQL statement is not allowed to run, provided it uses the execution plan specified in the quarantine configuration.

Note: the quarantine name can be obtained from the DBA_SQL_QUARANTINE dictionary view.

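For example, a query along the following lines lists the quarantine configurations (a sketch; the columns shown are taken from the documented view, though availability may vary by release):

```sql
-- Sketch: list existing quarantine configurations and their thresholds
SELECT name, sql_text, elapsed_time, enabled
FROM   dba_sql_quarantine;
```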
 

BEGIN
  DBMS_SQLQ.ALTER_QUARANTINE(
   QUARANTINE_NAME  =>  'SQL_QUARANTINE_ca0z7uh2sqcbw',
   PARAMETER_NAME   =>  'ELAPSED_TIME',
   PARAMETER_VALUE  =>  '30');
END;
/

 

With the SQL Quarantine now in place, when we try to issue the same SQL statement (which earlier ran for 20 seconds before being terminated by Resource Manager), it no longer gets executed at all, and we see a message stating that the plan used by this statement has been quarantined.
 

SQL> set timing on
SQL> select * from demo.myobjects where owner='SYS';
select * from demo.myobjects where owner='SYS'
                   *
ERROR at line 1:
ORA-56955: quarantined plan used

The V$SQL view has two additional columns which show the name of the quarantine and how many executions of the SQL statement have been avoided by the quarantine now in place.
 

SQL> select sql_quarantine,avoided_executions
  2  from v$sql where sql_id='491fa2p6qt9h6';

SQL_QUARANTINE
--------------------------------------------------------------------------------
AVOIDED_EXECUTIONS
------------------
SQL_QUARANTINE_ca0z7uh2sqcbw
		 1

 
Using the DBMS_SQLQ package subprograms, we can also enable or disable a quarantine configuration, delete a quarantine configuration, and if required transfer quarantine configurations from one database to another.
 

SQL> BEGIN
    DBMS_SQLQ.ALTER_QUARANTINE(
       QUARANTINE_NAME => 'SQL_QUARANTINE_ca0z7uh2sqcbw',
       PARAMETER_NAME  => 'ENABLED',
       PARAMETER_VALUE => 'NO');
END;
/ 

PL/SQL procedure successfully completed.

Note that since the quarantine has now been disabled, the query is no longer prevented from immediate execution, but it is cancelled once the Resource Manager plan directive related to elapsed time takes effect.
 

ERROR:
ORA-56735: elapsed time limit exceeded - call aborted

300 rows selected.

Elapsed: 00:00:19.64

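For completeness, deleting a quarantine configuration can be sketched with the DBMS_SQLQ.DROP_QUARANTINE subprogram mentioned above, using the quarantine name from this example:

```sql
-- Sketch: delete the quarantine configuration created earlier
BEGIN
  DBMS_SQLQ.DROP_QUARANTINE('SQL_QUARANTINE_ca0z7uh2sqcbw');
END;
/
```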

Oracle 19c New Feature Real-Time Statistics


In data warehouse environments, we often have situations where tables are truncated and new data (often millions of rows) is loaded. But when reports are run against those tables with freshly loaded data, unless we gather fresh statistics there is a possibility that the optimizer may choose a sub-optimal plan.

To address this issue, Oracle Database 12c introduced online statistics gathering, but only for tables where data was loaded via CREATE TABLE AS SELECT statements or direct-path inserts using the APPEND hint.

Oracle Database 19c introduces real-time statistics, which extends online statistics gathering to also include conventional DML statements.

Statistics are normally gathered by the automatic statistics gathering job which runs inside the database maintenance window, but that is just once a day.

But for volatile tables statistics can go stale between DBMS_STATS job executions, so the new Oracle 19c feature of real-time statistics can help the optimizer generate more optimal plans for such volatile tables.

Bulk load operations still gather all necessary statistics (pre-Oracle 19c behavior); real-time statistics augment rather than replace traditional statistics.

We have a table called MYOBJECTS_19C which currently has 47974 rows.
 

SQL> select distinct object_type from myobjects_19c where owner='SYS';

OBJECT_TYPE
-----------------------
INDEX
CLUSTER
TABLE PARTITION
...
...
VIEW
JAVA RESOURCE

26 rows selected.

SQL> select * from table (dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	8rbtnvw2uw67f, child number 0
-------------------------------------
select distinct object_type from myobjects_19c where owner='SYS'

Plan hash value: 1625058500

------------------------------------------------------------------------------------
| Id  | Operation	   | Name	   | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |		   |	   |	   |   258 (100)|	   |
|   1 |  HASH UNIQUE	   |		   |	26 |   364 |   258   (1)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| MYOBJECTS_19C | 47974 |   655K|   257   (1)| 00:00:01 |
------------------------------------------------------------------------------------

 
We now insert some additional rows into the table – basically doubling the number of rows in the table.

In earlier versions there would now be a possibility of sub-optimal plans being chosen by the optimizer, as it is ‘not aware’ that DML activity has happened on the table and that the number of rows in the table has doubled.

But now in Oracle 19c, we can see that as part of the INSERT statement, an OPTIMIZER STATISTICS GATHERING operation was also performed.
 

SQL> insert into myobjects_19c
    select * from myobjects;

47974 rows created.

SQL> commit;

Commit complete.

SQL> SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(format=>'TYPICAL'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	ahudb149n8f2f, child number 0
-------------------------------------
insert into myobjects_19c select * from myobjects

Plan hash value: 3078646338

--------------------------------------------------------------------------------------------------
| Id  | Operation			 | Name 	 | Rows  | Bytes | Cost (%CPU)| Time	 |
--------------------------------------------------------------------------------------------------
|   0 | INSERT STATEMENT		 |		 |	 |	 |   273 (100)| 	 |
|   1 |  LOAD TABLE CONVENTIONAL	 | MYOBJECTS_19C |	 |	 |	      | 	 |
|   2 |   OPTIMIZER STATISTICS GATHERING |		 | 47974 |    11M|   273   (1)| 00:00:01 |
|   3 |    TABLE ACCESS FULL		 | MYOBJECTS	 | 47974 |    11M|   273   (1)| 00:00:01 |
--------------------------------------------------------------------------------------------------

Note
-----
   - dynamic statistics used: statistics for conventional DML

 
When a hard parse of the SQL statement occurs, we can see that the optimizer has detected that additional rows have been added to the table.

This is also indicated in the Note section: dynamic statistics used: statistics for conventional DML
 

SQL> EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

PL/SQL procedure successfully completed.

SQL> alter system flush shared_pool;

System altered.

SQL> select distinct object_type from myobjects_19c where owner='SYS';

OBJECT_TYPE
-----------------------
INDEX
CLUSTER
TABLE PARTITION
...
...
VIEW
JAVA RESOURCE

26 rows selected.

SQL> select * from table (dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	8rbtnvw2uw67f, child number 0
-------------------------------------
select distinct object_type from myobjects_19c where owner='SYS'

Plan hash value: 1625058500

------------------------------------------------------------------------------------
| Id  | Operation	   | Name	   | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |		   |	   |	   |   624 (100)|	   |
|   1 |  HASH UNIQUE	   |		   |	26 |  2054 |   624   (1)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| MYOBJECTS_19C | 95948 |  7402K|   621   (1)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OWNER"='SYS')

Note
-----
   - dynamic statistics used: statistics for conventional DML

 
Real-time statistics are indicated by STATS_ON_CONVENTIONAL_DML in the NOTES column in the *_TAB_STATISTICS and *_TAB_COL_STATISTICS views.
 

SQL> SELECT NUM_ROWS, BLOCKS, NOTES 
FROM   USER_TAB_STATISTICS
WHERE  TABLE_NAME = 'MYOBJECTS_19C' ;

  NUM_ROWS     BLOCKS NOTES
---------- ---------- --------------------------------------------------
     47974	  938
     95948	 2284 STATS_ON_CONVENTIONAL_DML

Oracle 19c New Feature High-Frequency Statistics


The automatic optimizer statistics collection job, which calls the DBMS_STATS package, runs in predefined maintenance windows. These maintenance windows open once a day, and during them various jobs, including the gathering of statistics, are performed.

For volatile tables, statistics can go stale between two consecutive executions of the automatic statistics collection job. The presence of stale statistics can cause performance problems because the optimizer may choose sub-optimal execution plans.

The new feature introduced in Oracle 19c called High-Frequency Automatic Optimizer Statistics Collection complements the standard automatic statistics collection job.

By default, the high-frequency statistics collection occurs every 15 minutes and as such there is less possibility of having stale statistics even for those tables where data is changing continuously.

The DBMS_STATS.SET_GLOBAL_PREFS procedure is used to enable and disable the high-frequency statistics gather task as well as change the execution interval (default 15 minutes) and the maximum run time (60 minutes).
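For instance, the maximum run time can be changed in the same way. This is a sketch; the preference name AUTO_TASK_MAX_RUN and its value in seconds are taken from the documented DBMS_STATS preferences, not from a captured run:

```sql
-- Sketch: cap each high-frequency statistics run at 10 minutes (600 seconds)
EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_MAX_RUN','600');
```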

Let us see an example of using this new Oracle 19c feature.

We can see that the statistics for the MYOBJECTS_19C table are stale and we now use the DBMS_STATS.SET_GLOBAL_PREFS procedure to enable the high-frequency statistics gathering at 5 minute intervals.
 


SQL> select stale_stats from user_tab_statistics where table_name='MYOBJECTS_19C';

STALE_S
-------
YES

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_STATUS','ON');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('AUTO_TASK_INTERVAL','300');

PL/SQL procedure successfully completed.

 
We can query the DBA_AUTO_STAT_EXECUTIONS data dictionary view to get information on the status of the standard daily automatic statistics execution job. We can see that during the week the job runs during the maintenance window at night, while the weekend maintenance window is during the day instead.
 

SQL> SELECT OPID, ORIGIN, STATUS, TO_CHAR(START_TIME, 'DD/MM HH24:MI:SS' ) AS BEGIN_TIME,
       TO_CHAR(END_TIME, 'DD/MM HH24:MI:SS') AS END_TIME, COMPLETED, FAILED,
       TIMED_OUT AS TIMEOUT, IN_PROGRESS AS INPROG
FROM  DBA_AUTO_STAT_EXECUTIONS
ORDER BY OPID; 

 OPID ORIGIN		   STATUS      BEGIN_TIME     END_TIME	     COMPLETED FAILED TIMEOUT INPROG
----- -------------------- ----------- -------------- -------------- --------- ------ ------- ------
  659 AUTO_TASK 	   COMPLETED   10/06 23:00:50 10/06 23:02:02	   569	    2	    0	   0
  681 AUTO_TASK 	   COMPLETED   11/06 00:10:58 11/06 00:11:20	   296	    2	    0	   0
  684 AUTO_TASK 	   COMPLETED   11/06 00:20:59 11/06 00:21:11	    62	    2	    0	   0
  687 AUTO_TASK 	   COMPLETED   11/06 00:31:00 11/06 00:31:04	    43	    2	    0	   0
  690 AUTO_TASK 	   COMPLETED   11/06 00:41:01 11/06 00:41:05	    46	    2	    0	   0
  693 AUTO_TASK 	   COMPLETED   11/06 00:51:02 11/06 00:51:05	    44	    2	    0	   0
  699 AUTO_TASK 	   COMPLETED   11/06 01:01:04 11/06 01:01:12	   148	    2	    0	   0
  702 AUTO_TASK 	   COMPLETED   11/06 01:11:05 11/06 01:11:08	    43	    2	    0	   0
  705 AUTO_TASK 	   COMPLETED   11/06 01:21:06 11/06 01:21:08	    31	    2	    0	   0
  708 AUTO_TASK 	   COMPLETED   11/06 01:31:07 11/06 01:31:10	    39	    2	    0	   0
  711 AUTO_TASK 	   COMPLETED   11/06 01:41:09 11/06 01:41:12	    39	    2	    0	   0
 1045 AUTO_TASK 	   COMPLETED   12/06 22:00:09 12/06 22:02:47	   644	    1	    0	   0
 1085 AUTO_TASK 	   COMPLETED   13/06 22:00:03 13/06 22:02:09	   467	    1	    0	   0
 1125 AUTO_TASK 	   COMPLETED   15/06 08:23:50 15/06 08:25:46	   362	    1	    0	   0

14 rows selected.

 
After about 5 minutes have elapsed, if we run the same query again we can see another ‘AUTO_TASK’ statistics job running; this is the high-frequency statistics gathering job.

We can also see that the table which earlier had statistics reported as stale has now had fresh statistics gathered.
 

SQL> SELECT OPID, ORIGIN, STATUS, TO_CHAR(START_TIME, 'DD/MM HH24:MI:SS' ) AS BEGIN_TIME,
       TO_CHAR(END_TIME, 'DD/MM HH24:MI:SS') AS END_TIME, COMPLETED, FAILED,
       TIMED_OUT AS TIMEOUT, IN_PROGRESS AS INPROG
FROM  DBA_AUTO_STAT_EXECUTIONS
ORDER BY OPID; 

 OPID ORIGIN		   STATUS      BEGIN_TIME     END_TIME	     COMPLETED FAILED TIMEOUT INPROG
----- -------------------- ----------- -------------- -------------- --------- ------ ------- ------
  659 AUTO_TASK 	   COMPLETED   10/06 23:00:50 10/06 23:02:02	   569	    2	    0	   0
  681 AUTO_TASK 	   COMPLETED   11/06 00:10:58 11/06 00:11:20	   296	    2	    0	   0
  684 AUTO_TASK 	   COMPLETED   11/06 00:20:59 11/06 00:21:11	    62	    2	    0	   0
  687 AUTO_TASK 	   COMPLETED   11/06 00:31:00 11/06 00:31:04	    43	    2	    0	   0
  690 AUTO_TASK 	   COMPLETED   11/06 00:41:01 11/06 00:41:05	    46	    2	    0	   0
  693 AUTO_TASK 	   COMPLETED   11/06 00:51:02 11/06 00:51:05	    44	    2	    0	   0
  699 AUTO_TASK 	   COMPLETED   11/06 01:01:04 11/06 01:01:12	   148	    2	    0	   0
  702 AUTO_TASK 	   COMPLETED   11/06 01:11:05 11/06 01:11:08	    43	    2	    0	   0
  705 AUTO_TASK 	   COMPLETED   11/06 01:21:06 11/06 01:21:08	    31	    2	    0	   0
  708 AUTO_TASK 	   COMPLETED   11/06 01:31:07 11/06 01:31:10	    39	    2	    0	   0
  711 AUTO_TASK 	   COMPLETED   11/06 01:41:09 11/06 01:41:12	    39	    2	    0	   0
 1045 AUTO_TASK 	   COMPLETED   12/06 22:00:09 12/06 22:02:47	   644	    1	    0	   0
 1085 AUTO_TASK 	   COMPLETED   13/06 22:00:03 13/06 22:02:09	   467	    1	    0	   0
 1125 AUTO_TASK 	   COMPLETED   15/06 08:23:50 15/06 08:25:46	   362	    1	    0	   0
 1287 AUTO_TASK 	   IN PROGRESS 15/06 17:38:25 15/06 17:38:25	    83	    0	    0	   1

15 rows selected.

SQL> 
SQL> select stale_stats from user_tab_statistics where table_name='MYOBJECTS_19C';

STALE_S
-------
NO

Oracle 19c New Feature Hint Usage Report


In earlier releases no error was reported if an incorrect hint was used or if there was a syntax error in the hint. Tuning a sub-optimal execution plan became difficult, and sometimes we were left wondering why a full table scan was still occurring when an INDEX hint had been specified!

The database did not record or issue any error messages for hints that it ignores.

But now a new feature in Oracle 19c is the Hint Usage Report, which is enabled by default when using the DBMS_XPLAN functions DISPLAY, DISPLAY_CURSOR, DISPLAY_WORKLOAD_REPOSITORY and DISPLAY_SQL_PLAN_BASELINE.

Let us look at a worked example of Hint Usage Report in Oracle 19c.

In this case we specify the correct INDEX hint, but the hint refers to an index which does not exist on the table – we have made a typo (MYOBJECT_IND instead of MYOBJECTS_IND).

In earlier releases the hint typo would have gone undetected, but now the Hint Usage Report clearly indicates why the hint was not used.
 

SQL> select /*+ INDEX (MYOBJECTS_19C,MYOBJECT_IND) */ distinct object_type from myobjects_19c where owner='SYS';

OBJECT_TYPE
-----------------------
INDEX
CLUSTER
TABLE PARTITION
SYNONYM

...
...

TABLE
VIEW
JAVA RESOURCE

26 rows selected.

SQL> select * from table (dbms_xplan.display_cursor (format=>'HINT_REPORT'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	8xcj1mg0ht48d, child number 0
-------------------------------------
select /*+ INDEX (MYOBJECTS_19C,MYOBJECT_IND) */ distinct object_type
from myobjects_19c where owner='SYS'

Plan hash value: 1625058500

------------------------------------------------------------------------------------
| Id  | Operation	   | Name	   | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |		   |	   |	   |   624 (100)|	   |
|   1 |  HASH UNIQUE	   |		   |	26 |  2054 |   624   (1)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| MYOBJECTS_19C | 95948 |  7402K|   621   (1)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OWNER"='SYS')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   2 -	SEL$1 / MYOBJECTS_19C@SEL$1
	 U -  INDEX (MYOBJECTS_19C,MYOBJECT_IND) / index specified in the hint doesn't exist

 
We now correct the statement and the hint usage report indicates that this time the hint has been used.
 

SQL>  select /*+ INDEX (MYOBJECTS_19C,MYOBJECTS_IND) */ distinct object_type
   from myobjects_19c where owner='SYS';

OBJECT_TYPE
-----------------------
INDEX
CLUSTER
TABLE PARTITION
SYNONYM

...
...

TABLE
VIEW
JAVA RESOURCE

26 rows selected.

SQL> select * from table (dbms_xplan.display_cursor (format=>'HINT_REPORT'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	8rmk7qfazvuju, child number 0
-------------------------------------
 select /*+ INDEX (MYOBJECTS_19C,MYOBJECTS_IND) */ distinct object_type
from myobjects_19c where owner='SYS'

Plan hash value: 3518837258

------------------------------------------------------------------------------------------------------
| Id  | Operation			     | Name	     | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT		     |		     |	     |	     |	1019 (100)|	     |
|   1 |  HASH UNIQUE			     |		     |	  26 |	2054 |	1019   (1)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID BATCHED| MYOBJECTS_19C | 95948 |	7402K|	1016   (1)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN		     | MYOBJECTS_IND | 95948 |	     |	 101   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("OWNER"='SYS')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1
---------------------------------------------------------------------------

   2 -	SEL$1 / MYOBJECTS_19C@SEL$1
	   -  INDEX (MYOBJECTS_19C,MYOBJECTS_IND)

 
Here is an example of incorrect usage of the USE_NL hint, which has been detected as a syntax error.
 

SQL> select /*+ USE_NL */ distinct object_type
    from myobjects_19c where owner='SYS';

OBJECT_TYPE
-----------------------
INDEX
CLUSTER
TABLE PARTITION
SYNONYM

...
...

TABLE
VIEW
JAVA RESOURCE

26 rows selected.

SQL> select * from table (dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
SQL_ID	35s4c53z8x1g8, child number 0
-------------------------------------
select /*+ USE_NL */ distinct object_type from myobjects_19c where
owner='SYS'

Plan hash value: 1625058500

------------------------------------------------------------------------------------
| Id  | Operation	   | Name	   | Rows  | Bytes | Cost (%CPU)| Time	   |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |		   |	   |	   |   624 (100)|	   |
|   1 |  HASH UNIQUE	   |		   |	26 |  2054 |   624   (1)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL| MYOBJECTS_19C | 95948 |  7402K|   621   (1)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("OWNER"='SYS')

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (E - Syntax error (1))
---------------------------------------------------------------------------

   1 -	SEL$1
	 E -  USE_NL

Oracle 19c Grid Infrastructure Upgrade


This note describes the process used to upgrade a two-node Oracle 12c Release 2 Grid Infrastructure environment to Oracle 19c on Linux OEL7.

The upgrade from 12c to 19c is performed in a rolling fashion using batches which will limit the application downtime.

Create the directory structure for Oracle 19c Grid Infrastructure and unzip the software
 

[grid@rac01 bin]$ cd /u02/app

[grid@rac01 app]$ mkdir 19.3.0

[grid@rac01 app]$ cd 19.3.0/

[grid@rac01 19.3.0]$ mkdir grid

[grid@rac01 19.3.0]$ cd grid 

[grid@rac01 grid]$ unzip -q /media/sf_software/LINUX.X64_193000_grid_home.zip

 
Install the packages kmod and kmod-libs
 

[root@rac01 etc]# yum install kmod
[root@rac01 etc]# yum install kmod-libs

 
Check current Oracle Clusterware installation readiness for upgrades using Cluster Verification Utility (CVU)
 
From the 19c Grid Infrastructure home execute:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/12.2.0/grid -dest_crshome /u02/app/19.3.0/grid -dest_version 19.0.0.0.0 -fixup -verbose

 
Update opatch version and apply patches 28553832 and 27006180
 

[root@rac01 grid]# /u01/app/12.2.0/grid/OPatch/opatch version
OPatch Version: 12.2.0.1.17
OPatch succeeded.


[root@rac01 grid]# cd /media/sf_software/p27006180_122010_Linux-x86-64
[root@rac01 p27006180_122010_Linux-x86-64]# cd 27006180/
[root@rac01 27006180]# /u02/app/12.2.0/grid/OPatch/opatchauto apply 

OPatchauto session is initiated at Sun May 26 22:05:38 2019

System initialization log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-26_10-05-42PM.log.

Session log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-26_10-06-00PM.log
The id for this session is GLUN

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.2.0/grid
Patch applicability verified successfully on home /u02/app/12.2.0/grid


Bringing down CRS service on home /u02/app/12.2.0/grid
Prepatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_10-06-40PM.log
CRS service brought down successfully on home /u02/app/12.2.0/grid


Start applying binary patch on home /u02/app/12.2.0/grid
Binary patch applied successfully on home /u02/app/12.2.0/grid


Starting CRS service on home /u02/app/12.2.0/grid
Postpatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_10-11-48PM.log
CRS service started successfully on home /u02/app/12.2.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac01
CRS Home:/u02/app/12.2.0/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /media/sf_software/p27006180_122010_Linux-x86-64/27006180/27006180
Log: /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-26_22-09-14PM_1.log



OPatchauto session completed at Sun May 26 22:17:35 2019
Time taken to complete the session 11 minutes, 58 seconds
[root@rac01 27006180]# 



[root@rac01 28553832]# cd 28553832/
[root@rac01 28553832]# /u02/app/12.2.0/grid/OPatch/opatchauto apply

OPatchauto session is initiated at Sun May 26 23:11:04 2019

System initialization log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchautodb/systemconfig2019-05-26_11-11-07PM.log.

Session log file is /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/opatchauto2019-05-26_11-11-24PM.log
The id for this session is QQTG

Executing OPatch prereq operations to verify patch applicability on home /u02/app/12.2.0/grid
Patch applicability verified successfully on home /u02/app/12.2.0/grid


Bringing down CRS service on home /u02/app/12.2.0/grid
Prepatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_11-11-46PM.log
CRS service brought down successfully on home /u02/app/12.2.0/grid


Start applying binary patch on home /u02/app/12.2.0/grid
Binary patch applied successfully on home /u02/app/12.2.0/grid


Starting CRS service on home /u02/app/12.2.0/grid
Postpatch operation log file location: /u02/app/grid/crsdata/rac01/crsconfig/crspatch_rac01_2019-05-26_11-13-24PM.log
CRS service started successfully on home /u02/app/12.2.0/grid

OPatchAuto successful.

--------------------------------Summary--------------------------------

Patching is completed successfully. Please find the summary as follows:

Host:rac01
CRS Home:/u02/app/12.2.0/grid
Version:12.2.0.1.0
Summary:

==Following patches were SUCCESSFULLY applied:

Patch: /u01/app/28553832/28553832
Log: /u02/app/12.2.0/grid/cfgtoollogs/opatchauto/core/opatch/opatch2019-05-26_23-12-42PM_1.log

OPatchauto session completed at Sun May 26 23:18:51 2019
Time taken to complete the session 7 minutes, 48 seconds
[root@rac01 28553832]# 

 
 

Start the 19c Grid Infrastructure rolling upgrade
 
[grid@rac01 grid]$ cd /u02/app/19.3.0/grid

[grid@rac01 grid]$ ./gridSetup.sh
 

 
 

 
 

 
 

 
 

 
 

 
 

 
 
We select different batches here because, between Batch 1 and Batch 2, we can move services from the node still running the previous release to the upgraded node, so that services are not affected by the upgrade process.
 
 

 
 

 
 

 
 

 
 

 
 

 
 

 
 
We can see that when the root.sh is being run on node rac01, cluster services are still up and running on node rac02.

Upgrade of cluster services on rac02 will be performed as part of Batch 2.
 

[root@rac02 bin]# ./crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

[root@rac01 ~]# cd /u01/app/12.2.0/grid/bin
[root@rac01 bin]# ./crsctl check crs 
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
[root@rac01 bin]# 
 


 
 
We can see that the cluster is now in ROLLING UPGRADE mode.
 

[root@rac02 bin]# ./crsctl query crs softwareversion -all
Oracle Clusterware version on node [rac01] is [19.0.0.0.0]
Oracle Clusterware version on node [rac02] is [12.2.0.1.0]

[root@rac02 bin]# ./crsctl query crs activeversion -f 
Oracle Clusterware active version on the cluster is [12.2.0.1.0]. The cluster upgrade state is [ROLLING UPGRADE]. The cluster active patch level is [695302969].

 

 
 

 
 

 
 
Upgrade is now completed!
 

[root@rac02 bin]# ./crsctl query crs softwareversion -all
Oracle Clusterware version on node [rac01] is [19.0.0.0.0]
Oracle Clusterware version on node [rac02] is [19.0.0.0.0]

[root@rac02 bin]# ./crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [19.0.0.0.0]

 

DBCA New Features Oracle 19c


In Oracle 19c a number of new features have been added to DBCA in silent mode.

In Oracle 18c, we could clone a PDB both via the DBCA GUI and via the dbca -silent -createPluggableDatabase command.

New in Oracle 19c is the ability to create a remote clone of a PDB as well as perform the relocate of a PDB using the DBCA silent mode.

We can use the createFromRemotePDB parameter of the DBCA command createPluggableDatabase to create a PDB by cloning a remote PDB as well as use the relocatePDB option to relocate a PDB from one container database to another.

Let us have a look at a worked example.

Here is the 19c environment:

a) host01: CDB1 (PDB1,PDB2)
b) host02: CDB2 (PDB2)

We will remote clone PDB1 from CDB1 and create the pluggable database in the container database CDB2.

We will then create another pluggable database PDB3 in CDB1 by cloning PDB1 using the dbca silent method.

Then we will relocate PDB3 from CDB1 to CDB2.
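The relocation step can be sketched along the following lines; note that the flags shown here are assumed by analogy with the remote-clone command used later, not taken from a captured run:

```
# Sketch (assumed flags): relocate PDB3 from CDB1 into CDB2 using dbca silent mode
[oracle@host02 ~]$ dbca -silent -relocatePDB -pdbName pdb3 -sourceDB cdb2 \
  -remotePDBName pdb3 -dbLinkUsername c##link_user -dbLinkUserPassword oracle \
  -remoteDBConnString "host01:1521/cdb1.localdomain" \
  -sysDBAUserName SYS -sysDBAPassword welcome1
```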

For this we need to create a common user in CDB1 and use this user to create a database link from CDB2 to CDB1.
 

***Create common user in CDB1***

SQL> create user c##link_user identified by oracle container=all;

User created.

SQL> grant sysoper,sysdba to c##link_user container=all;

Grant succeeded.

SQL> grant create pluggable database to c##link_user container=all;

Grant succeeded.

SQL> grant create session to c##link_user container=all;

Grant succeeded.


***Add TNS entry to connect to CDB1 from CDB2***

CDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = host01.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdb1.localdomain)
    )
  )

***Create database link from CDB2 to CDB1***

SQL> create database link cdb1_link connect to c##link_user identified by oracle
    using 'cdb1';

Database link created.


SQL>  select * from dual@cdb1_link;

D
-
X


***Note the PDBs currently in CDB2***

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB2 			  READ WRITE NO

 

Remote Pluggable Clone

 
Create the Pluggable Database using the dbca -silent -createPluggableDatabase command with the -createFromRemotePDB clause.
 

[oracle@host02 ~]$ dbca -silent -createPluggableDatabase -pdbname pdb1 -sourceDB cdb2 -createFromRemotePDB -remotePDBName pdb1 -dbLinkUsername c##link_user -remoteDBConnString "host01:1521/cdb1.localdomain" -remoteDBSYSDBAUserName SYS -dbLinkUserPassword oracle -remoteDBSYSDBAUserPassword welcome1 -sysDBAUserName SYS -sysDBAPassword welcome1 -pdbDatafileDestination '/u01/app/oracle/oradata/CDB2/pdb1/' 

Prepare for db operation
50% complete
Create pluggable database using remote clone operation
100% complete
Pluggable database "pdb1" plugged successfully.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb2/pdb1/cdb2.log" for further details.


SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB2 			  READ WRITE NO
	 4 PDB1 			  READ WRITE NO

 

Relocate Pluggable Database

 

First we create PDB3 in CDB1 by cloning from PDB1 using the dbca -silent -createPluggableDatabase command.
 

[oracle@host01 ~]$ dbca -silent -createPluggableDatabase -sourceDB cdb1 -pdbName pdb3 -createAsClone FALSE -createPDBFrom PDB -sourcePDB pdb1 -fileNameConvert '/u01/app/oracle/oradata/CDB1/pdb1/','/u01/app/oracle/oradata/CDB1/pdb3/'
Prepare for db operation
13% complete
Creating Pluggable Database
15% complete
19% complete
23% complete
31% complete
53% complete
Completing Pluggable Database Creation
60% complete
Executing Post Configuration Actions
100% complete
Pluggable database "pdb3" plugged successfully.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb1/pdb3/cdb1.log" for further details.

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO
	 4 PDB2 			  READ WRITE NO
 	 5 PDB3				  READ WRITE NO

 
Relocate PDB3 from CDB1 to CDB2 using the dbca -silent -relocatePDB command.
 

[oracle@host02 ~]$ dbca -silent  -relocatePDB -pdbname pdb3 -sourceDB cdb2 -remotePDBName pdb3 -dbLinkUsername c##link_user -remoteDBConnString "host01:1521/cdb1.localdomain" -remoteDBSYSDBAUserName SYS -dbLinkUserPassword oracle -remoteDBSYSDBAUserPassword G#vin2407 -sysDBAUserName SYS -sysDBAPassword G#vin2407 -pdbDatafileDestination '/u01/app/oracle/oradata/CDB2/pdb3/' 
Prepare for db operation
50% complete
Create pluggable database using relocate PDB operation
100% complete
Pluggable database "pdb3" plugged successfully.
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/cdb2/pdb3/cdb2.log" for further details.


****Note the PDBs now in CDB2****

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB2 			  READ WRITE NO
	 4 PDB1 			  READ WRITE NO
	 5 PDB3 			  READ WRITE NO

****Note the PDBs now in CDB1****

SQL> show pdbs

    CON_ID CON_NAME			  OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
	 2 PDB$SEED			  READ ONLY  NO
	 3 PDB1 			  READ WRITE NO
	 4 PDB2 			  READ WRITE NO

SQL*Plus New Features in Oracle 12c Release 2 and 18c


A number of new features were added in Oracle 12c Release 2 and Oracle 18c related to SQL*Plus.

Let us have a quick look at some of these new features and how they can be used.
 

Oracle 18c
  • SET FEEDBACK ON SQL_ID: Display the sql_id for the currently executed SQL statement
  • SET ROWLIMIT n: Set a limit for the number of rows displayed for a query
  • SET LINESIZE with WINDOW option: Dynamically change and format the displayed output to fit the screen or window size
Oracle 12c Release 2
  • HISTORY: Display and run previously executed SQL and PL/SQL commands
  • SET MARKUP CSV: Option to output data in CSV format with choice of delimiter
  • SET FEEDBACK ONLY: Option to only display the number of rows selected and no data is displayed
  • sqlplus -F (or -fast): Changes the default values of settings like ARRAYSIZE, LOBPREFETCH, PAGESIZE and ROWPREFETCH to improve performance

 

SQL> set feedback only

SQL> select * from hr.employees;

107 rows selected.


SQL> set feedback on sql_id

SQL> select * from hr.employees where first_name='Susan';

EMPLOYEE_ID FIRST_NAME		 LAST_NAME
----------- -------------------- -------------------------
EMAIL			  PHONE_NUMBER	       HIRE_DATE JOB_ID 	SALARY
------------------------- -------------------- --------- ---------- ----------
COMMISSION_PCT MANAGER_ID DEPARTMENT_ID
-------------- ---------- -------------
	203 Susan		 Mavris
SMAVRIS 		  515.123.7777	       07-JUN-02 HR_REP 	  6500
		      101	     40


1 row selected.

SQL_ID: gw7ra2jba93p6

SQL> set rowlimit 5

SQL> select first_name from hr.employees;

FIRST_NAME
--------------------
Ellen
Sundar
Mozhe
David
Hermann

5 rows selected. (rowlimit reached)

 

SET LINESIZE WINDOW – As the window is made bigger with every execution of the same query, we can see the linesize automatically change and more columns appear on the same line.
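The screenshots originally accompanying this demo are not reproduced here, so here is a minimal sketch of the commands involved (output omitted as it depends on the current window size):

SQL> SET LINESIZE WINDOW

SQL> select * from hr.employees;

-- resize the terminal window, then re-run the same query
SQL> /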
 

 
 

 

SQL> SHOW ARRAYSIZE LOBPREFETCH PAGESIZE ROWPREFETCH STATEMENTCACHE
arraysize 15
lobprefetch 0
pagesize 14
rowprefetch 1
statementcache is 0

SQL> quit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

[oracle@linux01 bin]$ sqlplus -fast apex_owner/oracle

SQL*Plus: Release 12.2.0.1.0 Production on Wed Jul 12 22:47:00 2017

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> SHOW ARRAYSIZE LOBPREFETCH PAGESIZE ROWPREFETCH STATEMENTCACHE
arraysize 100
lobprefetch 16384
pagesize 50000
rowprefetch 2
statementcache is 20

 

SQL> set markup csv on 

SQL> select * from hr.regions;

"REGION_ID","REGION_NAME"
1,"Europe"
2,"Americas"
3,"Asia"
4,"Middle East and Africa"

SQL> set markup csv on quote off delimiter |
SQL> /

REGION_ID|REGION_NAME
1|Europe
2|Americas
3|Asia
4|Middle East and Africa

 

SQL> show history
history is OFF

SQL> set history on 

SQL> select count(*) from exp_detail;

  COUNT(*)
----------
       450

SQL> select count(*) from months;

  COUNT(*)
----------
	 6

SQL> select sysdate from dual;

SYSDATE
---------
12-JUL-17

SQL> history
  1  select count(*) from exp_detail;
  2  select count(*) from months;
  3  select sysdate from dual;


SQL> history 2 run

  COUNT(*)
----------
	 6

SQL> history 3 run 

SYSDATE
---------
12-JUL-17

SQL> history clear  

SQL> history
SP2-1651: History list is empty.

SQLcl – SQL Command Line Interface Top Features


SQLcl, also known as the SQL Command Line Interface, was earlier available as a stand-alone utility which we could download – it is now bundled with the Oracle 18c and Oracle 19c software (as well as 12c Release 2).

Think of SQLcl as a feature-rich combination of SQL*Plus and SQL Developer – all the helpful elements and cool utilities of the GUI available in a command line interface.

In-line editing, automatic SQL output formatting, reuse of commands and custom scripts with the ALIAS and REPEAT commands, INFORMATION and INFO+ – these are just a few of the cool SQLcl features which will make you stop using SQL*Plus!

Let’s have a look at some SQLcl features (the HELP command provides us a lot of information about a command with examples of how to use them).
 
Note: we can launch SQLcl via the sql executable located under the $ORACLE_HOME/sqldeveloper directory.
 

[oracle@host02 admin]$ cd /u01/app/oracle/product/19.3.0/dbhome_1/sqldeveloper/sqldeveloper/bin

[oracle@host02 bin]$ ./sql hr/hr@localhost:1521/pdb1.localdomain

SQLcl: Release 19.1 Production on Tue Jun 25 15:13:24 2019

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

 
SHOW TNS
 

SQL> show tns 
TNS Lookup locations
--------------------
1.  USER Home dir
    /home/oracle
2.  ORACLE_HOME
    /u01/app/oracle/product/19.3.0/dbhome_1/network/admin

Location used:
-------------
	/u01/app/oracle/product/19.3.0/dbhome_1/network/admin

Available TNS Entries
---------------------
CDB1
LISTENER_CDB1
LISTENER_SH1
PDB1
SH1

 
INFO
 

SQL> info employees
TABLE: EMPLOYEES 
	 LAST ANALYZED:2019-06-25 14:30:58.0 
	 ROWS         :107 
	 SAMPLE SIZE  :107 
	 INMEMORY     :DISABLED 
	 COMMENTS     :employees table. Contains 107 rows. References with departments,
                       jobs, job_history tables. Contains a self reference. 

Columns 
NAME             DATA TYPE           NULL  DEFAULT    COMMENTS
*EMPLOYEE_ID     NUMBER(6,0)         No               Primary key of employees table.
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes              First name of the employee. A not null column.
 LAST_NAME       VARCHAR2(25 BYTE)   No               Last name of the employee. A not null column.
 EMAIL           VARCHAR2(25 BYTE)   No               Email id of the employee
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes              Phone number of the employee; includes country
                                                      code and area code
 HIRE_DATE       DATE                No               Date when the employee started on this job. A not
                                                      null column.
 JOB_ID          VARCHAR2(10 BYTE)   No               Current job of the employee; foreign key to job_id
                                                      column of thejobs table. A not null column.
 SALARY          NUMBER(8,2)         Yes              Monthly salary of the employee. Must be
                                                      greaterthan zero (enforced by constraint
                                                      emp_salary_min)
 COMMISSION_PCT  NUMBER(2,2)         Yes              Commission percentage of the employee; Only
                                                      employees in salesdepartment elgible for
                                                      commission percentage
 MANAGER_ID      NUMBER(6,0)         Yes              Manager id of the employee; has same domain as
                                                      manager_id indepartments table. Foreign key to
                                                      employee_id column of employees table.(useful for
                                                      reflexive joins and CONNECT BY query)
 DEPARTMENT_ID   NUMBER(4,0)         Yes              Department id where employee works; foreign key to
                                                      department_idcolumn of the departments table

Indexes
INDEX_NAME             UNIQUENESS   STATUS   FUNCIDX_STATUS   COLUMNS                 
HR.EMP_JOB_IX          NONUNIQUE    VALID                     JOB_ID                  
HR.EMP_NAME_IX         NONUNIQUE    VALID                     LAST_NAME, FIRST_NAME   
HR.EMP_EMAIL_UK        UNIQUE       VALID                     EMAIL                   
HR.EMP_EMP_ID_PK       UNIQUE       VALID                     EMPLOYEE_ID             
HR.EMP_MANAGER_IX      NONUNIQUE    VALID                     MANAGER_ID              
HR.EMP_DEPARTMENT_IX   NONUNIQUE    VALID                     DEPARTMENT_ID           


References
TABLE_NAME    CONSTRAINT_NAME   DELETE_RULE   STATUS    DEFERRABLE       VALIDATED   GENERATED   
DEPARTMENTS   DEPT_MGR_FK       NO ACTION     ENABLED   NOT DEFERRABLE   VALIDATED   USER NAME   
EMPLOYEES     EMP_MANAGER_FK    NO ACTION     ENABLED   NOT DEFERRABLE   VALIDATED   USER NAME   
JOB_HISTORY   JHIST_EMP_FK      NO ACTION     ENABLED   NOT DEFERRABLE   VALIDATED   USER NAME   

 

INFO+ (note additional details like Histograms etc)
 

SQL> info+ hr.employees
TABLE: EMPLOYEES 
	 LAST ANALYZED:2019-06-25 14:30:58.0 
	 ROWS         :107 
	 SAMPLE SIZE  :107 
	 INMEMORY     :DISABLED 
	 COMMENTS     :employees table. Contains 107 rows. References with departments,
                       jobs, job_history tables. Contains a self reference. 

Columns 
NAME             DATA TYPE           NULL  DEFAULT    LOW_VALUE             HIGH_VALUE            NUM_DISTINCT   HISTOGRAM  
*EMPLOYEE_ID     NUMBER(6,0)         No                   100                   206                   107            NONE       
 FIRST_NAME      VARCHAR2(20 BYTE)   Yes                  Adam                  Winston               91             FREQUENCY  
 LAST_NAME       VARCHAR2(25 BYTE)   No                   Abel                  Zlotkey               102            NONE       
 EMAIL           VARCHAR2(25 BYTE)   No                   ABANDA                WTAYLOR               107            NONE       
 PHONE_NUMBER    VARCHAR2(20 BYTE)   Yes                  011.44.1343.329268    650.509.4876          107            NONE       
 HIRE_DATE       DATE                No                   2001.01.13.00.00.00   2008.04.21.00.00.00   98             NONE       
 JOB_ID          VARCHAR2(10 BYTE)   No                   AC_ACCOUNT            ST_MAN                19             FREQUENCY  
 SALARY          NUMBER(8,2)         Yes                  2100                  24000                 58             NONE       
 COMMISSION_PCT  NUMBER(2,2)         Yes                  .1                    .4                    7              NONE       
 MANAGER_ID      NUMBER(6,0)         Yes                  100                   205                   18             FREQUENCY  
 DEPARTMENT_ID   NUMBER(4,0)         Yes                  10                    110                   11             FREQUENCY  

Indexes
INDEX_NAME             UNIQUENESS   STATUS   FUNCIDX_STATUS   COLUMNS                 
HR.EMP_JOB_IX          NONUNIQUE    VALID                     JOB_ID                  
HR.EMP_NAME_IX         NONUNIQUE    VALID                     LAST_NAME, FIRST_NAME   
HR.EMP_EMAIL_UK        UNIQUE       VALID                     EMAIL                   
HR.EMP_EMP_ID_PK       UNIQUE       VALID                     EMPLOYEE_ID             
HR.EMP_MANAGER_IX      NONUNIQUE    VALID                     MANAGER_ID              
HR.EMP_DEPARTMENT_IX   NONUNIQUE    VALID                     DEPARTMENT_ID           


References
TABLE_NAME    CONSTRAINT_NAME   DELETE_RULE   STATUS    DEFERRABLE       VALIDATED   GENERATED   
DEPARTMENTS   DEPT_MGR_FK       NO ACTION     ENABLED   NOT DEFERRABLE   VALIDATED   USER NAME   
EMPLOYEES     EMP_MANAGER_FK    NO ACTION     ENABLED   NOT DEFERRABLE   VALIDATED   USER NAME   
JOB_HISTORY   JHIST_EMP_FK      NO ACTION     ENABLED   NOT DEFERRABLE   VALIDATED   USER NAME   

 
DDL
 

SQL> ddl regions

  CREATE TABLE "HR"."REGIONS" 
   (	"REGION_ID" NUMBER CONSTRAINT "REGION_ID_NN" NOT NULL ENABLE, 
	"REGION_NAME" VARCHAR2(25)
   ) SEGMENT CREATION IMMEDIATE 
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255 
 NOCOMPRESS LOGGING
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSAUX" ;
  CREATE UNIQUE INDEX "HR"."REG_ID_PK" ON "HR"."REGIONS" ("REGION_ID") 
  PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS 
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
  BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
  TABLESPACE "SYSAUX" ;
ALTER TABLE "HR"."REGIONS" ADD CONSTRAINT "REG_ID_PK" PRIMARY KEY ("REGION_ID")
  USING INDEX "HR"."REG_ID_PK"  ENABLE;

 
CD
 

SQL> cd /home/oracle

SQL> host ls -l
total 3420
-rwxr-x--- 1 oracle oinstall 3490686 Jun  9 16:34 autoupgrade.jar
drwxr-xr-x 2 oracle oinstall       6 Jun  1 22:48 Desktop
drwxr-xr-x 2 oracle oinstall       6 Jun  1 22:48 Documents
drwxr-xr-x 2 oracle oinstall       6 Jun  1 22:48 Downloads

ALIAS and REPEAT

SQL> alias active_users=select sid,serial#,username from v$session where status='ACTIVE' and username is not null;

SQL> active_users

       SID    SERIAL#	USERNAME                                                                        
-----------  --------  ------------
        13      59956   SYS
       111      27507   HR
                                                                             
SQL> repeat 3 5

Running 1 of 3  @ 7:10:16.793 with a delay of 5s

       SID    SERIAL#	USERNAME                                                                        
-----------  --------  ------------
        13      59956   SYS
       111      27507   HR
                  

Running 2 of 3  @ 7:10:21.837 with a delay of 5s

      SID    SERIAL#	USERNAME                                                                        
-----------  --------  ------------
        13      59956   SYS
       111      27507   HR                                   


Running 3 of 3  @ 7:10:26.865 with a delay of 5s

      SID    SERIAL#	USERNAME                                                                        
-----------  --------  ------------
        13      59956   SYS
       111      27507   HR

 
SET SQLFORMAT ANSICONSOLE
 


Oracle 18c New Feature Private Temporary Tables


Active Data Guard databases are now no longer just ‘read-only’ databases – they have now become ‘read-mostly’ databases which are primarily used for reporting purposes but also allow, to some extent, DML activity as well.

One of the new features in Oracle 18c is Private Temporary Tables.

Private temporary tables differ from global temporary tables in some ways. They are stored in memory only, not on disk, and are visible only to the session which creates them. The name of the table must be prefixed with the string ‘ORA$PTT’.

They are temporary database objects which are dropped either at the end of the transaction or end of the session. Different sessions of the same user can use the same name for the private temporary table.

These tables can be useful when the application which is predominantly read-only also has a requirement to perform some DML activity like inserting or updating some temporary data in transient tables that are then queried a few times and then dropped at the end of either a transaction or session.
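As a quick sketch of the two scopes (this assumes the default ORA$PTT_ name prefix, which is set by the PRIVATE_TEMP_TABLE_PREFIX initialization parameter, and the USER_PRIVATE_TEMP_TABLES dictionary view, which lists the private temporary tables visible to the current session):

-- Transaction-scoped: definition and data dropped at commit/rollback
SQL> CREATE PRIVATE TEMPORARY TABLE ora$ptt_txn (n number)
  2  ON COMMIT DROP DEFINITION;

-- Session-scoped: definition and data dropped at end of session
SQL> CREATE PRIVATE TEMPORARY TABLE ora$ptt_sess (n number)
  2  ON COMMIT PRESERVE DEFINITION;

-- List the private temporary tables visible to this session
SQL> select * from user_private_temp_tables;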

Let us have a look at this feature.

We connect as user HR to the pluggable database PDB1 – but on the read-only Active Data Guard standby database.
 
Session 1 of user HR
 


SQL> CREATE PRIVATE TEMPORARY TABLE ORA$PTT_ADG_SESSIONS
  2   (username      varchar2(20), sid number , serial# number)
  3  ON COMMIT PRESERVE DEFINITION;

Table created.

SQL> insert into ORA$PTT_ADG_SESSIONS
 select s.username          i_username, to_char(s.sid)          i_sid, to_char(s.serial#)      i_serial
from 
v$session s, v$process p
where 
s.paddr = p.addr
and 
sid = (select sid from v$mystat where rownum = 1);  

1 row created.

SQL> select * from ORA$PTT_ADG_SESSIONS;

USERNAME		    SID    SERIAL#
-------------------- ---------- ----------
HR			    465      20742

 
Session 2 of user HR
 
Note that the table name is the same – but the data is different.
 

SQL> CREATE PRIVATE TEMPORARY TABLE ORA$PTT_ADG_SESSIONS
  2  (username      varchar2(20), sid number , serial# number)
  3  ON COMMIT PRESERVE DEFINITION;

Table created.

SQL> insert into ORA$PTT_ADG_SESSIONS
 select s.username          i_username, to_char(s.sid)          i_sid, to_char(s.serial#)      i_serial
from 
v$session s, v$process p
where 
s.paddr = p.addr
and 
sid = (select sid from v$mystat where rownum = 1);

1 row created.

SQL>  select * from ORA$PTT_ADG_SESSIONS;

USERNAME		    SID    SERIAL#
-------------------- ---------- ----------
HR			    472      46742

SQL> 

 
Reconnect as user HR
 

SQL> conn hr/hr@pdb1
Connected.

SQL> select * from ORA$PTT_ADG_SESSIONS;
select * from ORA$PTT_ADG_SESSIONS
              *
ERROR at line 1:
ORA-00942: table or view does not exist

 
Private Temporary Table with ON COMMIT DROP DEFINITION
 

SQL>  CREATE PRIVATE TEMPORARY TABLE ORA$PTT_ADG_SESSIONS
  2   (username      varchar2(20), sid number , serial# number)
  3  ON COMMIT DROP DEFINITION;

Table created.

SQL>  insert into ORA$PTT_ADG_SESSIONS
 select s.username          i_username, to_char(s.sid)          i_sid, to_char(s.serial#)      i_serial
from 
v$session s, v$process p
where 
s.paddr = p.addr
and 
sid = (select sid from v$mystat where rownum = 1);   

1 row created.

SQL> select * from ORA$PTT_ADG_SESSIONS;

USERNAME		    SID    SERIAL#
-------------------- ---------- ----------
HR			    465      20742

SQL> update ORA$PTT_ADG_SESSIONS set USERNAME='NOBODY';

1 row updated.

SQL> select * from ORA$PTT_ADG_SESSIONS;

USERNAME		    SID    SERIAL#
-------------------- ---------- ----------
NOBODY			    465      20742

SQL> commit;

Commit complete.

SQL>  select * from ORA$PTT_ADG_SESSIONS;
 select * from ORA$PTT_ADG_SESSIONS
               *
ERROR at line 1:
ORA-00942: table or view does not exist

Oracle 19c New Feature Automatic Flashback of Standby Database


One of the new features in Oracle 19c is that when a flashback or point-in-time recovery is performed on the primary database in an Oracle Data Guard configuration, the same operation is automatically performed on the standby database as well.

Following a flashback or PITR operation, the primary database is then opened with the RESETLOGS option.

The RESETLOGS operation leads to a new incarnation of the primary database (or of the PDB in the primary).
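The incarnation history, including the new incarnation created by the RESETLOGS, can be seen in the V$DATABASE_INCARNATION view:

SQL> select incarnation#, resetlogs_change#, status
  2  from v$database_incarnation;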

What’s new then in Oracle 19c?

The MRP process on the standby detects the new incarnation and moves the standby database to the new ‘branch’ of redo and then flashes back the standby or the pluggable database on the standby to the same point in time as that of the primary or the PDB on the primary.

In earlier releases, we had to obtain the RESETLOGS SCN# on the primary and then manually issue a FLASHBACK DATABASE command on the standby database to enable managed recovery and continue with the redo apply process.
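As a rough sketch, the pre-19c manual procedure on the standby looked something like this (the documented recommendation is to flash back to two SCNs before the RESETLOGS SCN of the primary):

-- On the primary: obtain the RESETLOGS SCN
SQL> select resetlogs_change# from v$database;

-- On the standby: flash back to before that SCN, then restart redo apply
SQL> flashback database to scn <resetlogs_change# minus 2>;
SQL> alter database recover managed standby database disconnect;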

Another new Oracle 19c feature is that when we create a restore point on the primary database, a corresponding restore point is automatically created on the standby database as well.

These restore points are called Replicated Restore Points and have the restore point name suffixed with “_PRIMARY”.

Let us have a look at this feature in action.

On the primary database we will create a guaranteed restore point.
 

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL> create table hr.myobjects as select * from all_objects;

Table created.

SQL> select count(*) from hr.myobjects;

  COUNT(*)
----------
     71296


SQL> create restore point orcl_grp guarantee flashback database;

Restore point created.

SQL> select name from v$restore_point;

NAME
--------------------------------------------------------------------------------
ORCL_GRP

 
Note that on the standby database, the restore point has been automatically created and the name has the suffix _PRIMARY
 

SQL> select flashback_on from v$database;

FLASHBACK_ON
------------------
YES

SQL>  select count(*) from hr.myobjects;

  COUNT(*)
----------
     71296

SQL> select name from v$restore_point;

NAME
--------------------------------------------------------------------------------
ORCL_GRP_PRIMARY

 
On the primary database we can see that the REPLICATED column has the value NO for the restore point while on the standby database the value is YES
 

Primary 

SQL> select NAME,REPLICATED from v$restore_point;

NAME			       REP
------------------------------ ---
ORCL_GRP	       		NO


Standby

SQL> select NAME,REPLICATED from v$restore_point;

NAME			       REP
------------------------------ ---
ORCL_GRP_PRIMARY	       YES

 
We now simulate a case where a human error has been made and we now need to perform a flashback operation on the primary to resolve the human error.

Flashback is performed to the restore point created earlier and we then open the database with the RESETLOGS option.
 

SQL> truncate table hr.myobjects;

Table truncated.

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1241513488 bytes
Fixed Size		    8896016 bytes
Variable Size		  335544320 bytes
Database Buffers	  889192448 bytes
Redo Buffers		    7880704 bytes
Database mounted.

SQL> flashback database to restore point orcl_grp;

Flashback complete.

SQL> alter database open resetlogs;

Database altered.


 
The standby database is placed in MOUNT mode and we will see that the MRP process on the standby database will start and perform the automatic flashback operation on the standby database as well.
 

SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> startup mount;
ORACLE instance started.

Total System Global Area 1241513488 bytes
Fixed Size		    8896016 bytes
Variable Size		  318767104 bytes
Database Buffers	  905969664 bytes
Redo Buffers		    7880704 bytes
Database mounted.

 

...
...
 rfs (PID:27484): Primary database is in MAXIMUM PERFORMANCE mode
2019-07-01T23:00:30.671381+08:00
 rfs (PID:27484): Selected LNO:5 for T-1.S-3 dbid 1540291890 branch 1012517366
2019-07-01T23:00:34.853803+08:00
ARC0 (PID:27457): Archived Log entry 9 added for T-1.S-3 ID 0x5bcedc53 LAD:1
2019-07-01T23:00:34.930249+08:00
 rfs (PID:27500): Primary database is in MAXIMUM PERFORMANCE mode
2019-07-01T23:00:35.073538+08:00
 rfs (PID:27500): Selected LNO:5 for T-1.S-4 dbid 1540291890 branch 1012517366
2019-07-01T23:00:35.909833+08:00
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
2019-07-01T23:00:35.910522+08:00
Attempt to start background Managed Standby Recovery process (orcl)
Starting background process MRP0
2019-07-01T23:00:35.932806+08:00
MRP0 started with pid=49, OS id=27505 
2019-07-01T23:00:35.935882+08:00
Background Managed Standby Recovery process started (orcl)
2019-07-01T23:00:40.940224+08:00
Serial Media Recovery started

...
...

MRP0 (PID:27505): Recovery coordinator performing automatic flashback of database to SCN:0x00000000001fcce5 (2084069)
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
2019-07-01T23:01:01.852650+08:00
Setting recovery target incarnation to 2
2019-07-01T23:01:01.852994+08:00
Serial Media Recovery started
stopping change tracking
2019-07-01T23:01:01.976765+08:00
Media Recovery Log /u01/app/oracle/fast_recovery_area/ORCL_SB/archivelog/2019_07_01/o1_mf_1_9_gkn5pkx9_.arc
2019-07-01T23:01:02.142326+08:00
Resize operation completed for file# 3, old size 522240K, new size 552960K
Restore point ORCL_GRP_PRIMARY propagated from primary already exists
2019-07-01T23:01:02.215320+08:00
Media Recovery Log /u01/app/oracle/fast_recovery_area/ORCL_SB/archivelog/2019_07_01/o1_mf_1_10_gkn5pnf4_.arc
2019-07-01T23:01:02.657960+08:00
Media Recovery Log /u01/app/oracle/fast_recovery_area/ORCL_SB/archivelog/2019_07_01/o1_mf_1_11_gkn6s6g1_.arc
2019-07-01T23:01:02.795226+08:00
Media Recovery Log /u01/app/oracle/fast_recovery_area/ORCL_SB/archivelog/2019_07_01/o1_mf_1_12_gkn7cxvl_.arc
2019-07-01T23:01:02.956167+08:00
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
2019-07-01T23:01:04.624321+08:00
Incomplete Recovery applied until change 2084069 time 07/01/2019 22:47:05
Flashback Media Recovery Complete

 
When we see the message “Flashback Media Recovery Complete” in the standby database alert log, we can now open the standby database.

Note that the Data Guard Broker configuration is not showing any error and we did not have to perform any manual steps on the standby database to enable the configuration following the flashback of the primary database.
 

SQL> ALTER DATABASE OPEN;

Database altered.

DGMGRL> show configuration;

Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Members:
  orcl    - Primary database
    orcl_sb - Physical standby database 

Fast-Start Failover:  Disabled

Configuration Status:
SUCCESS   (status updated 54 seconds ago)

Oracle Data Guard Broker New Features and Creating a CDB Standby Database via DBCA


Oracle 12c Release 2 introduced the ability to execute a DGMGRL command script via the @ command as well as the ability to call host operating system commands via the HOST command.

A number of new Data Guard Broker features were introduced in Oracle 18c, such as the DGMGRL commands VALIDATE DATABASE SPFILE, VALIDATE NETWORK CONFIGURATION, SET ECHO ON and SET TIME ON. In addition, the DMON (Data Guard Monitor) process started by the Broker is also visible via the V$DATAGUARD_PROCESS view, which was introduced in 12c Release 2.

In Oracle 19c we can export the broker configuration to a text file via the EXPORT CONFIGURATION TO command, which then serves as a backup of the current broker configuration. We can then use the IMPORT CONFIGURATION command, which enables us to import the broker configuration metadata that was previously exported via EXPORT CONFIGURATION.
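A minimal sketch of these two commands (the file name here is illustrative; by default the exported file is written to the trace directory of the database):

DGMGRL> EXPORT CONFIGURATION TO 'dg_broker_config.xml';

DGMGRL> IMPORT CONFIGURATION FROM 'dg_broker_config.xml';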

This note describes the simple process of creating a Data Guard Physical Standby Database for a Multitenant Container Database using DBCA and using the Data Guard Broker scripting feature to execute a DGMGRL script to automate the creation of the Data Guard Broker configuration. We can see the use of some of the new features described above.

Use the DBCA -createAsStandby option to create a Data Guard physical standby database in a Multitenant database environment.
 
Environment:

  • host02: CDB1 – Primary Container Database
  • host03: CDB1_SB – Standby Container Database

 
Note that an auxiliary connection is automatically being made and the RMAN DUPLICATE command is being silently invoked by DBCA.
 


[oracle@host03 dbs]$ dbca -silent -createDuplicateDB -gdbName CDB1.localdomain -primaryDBConnectionString host02:1521/CDB1.localdomain -sid cdb1 -createAsStandby -dbUniqueName CDB1_SB
Enter SYS user password:

Prepare for db operation
22% complete
Listener config step
44% complete
Auxiliary instance creation
67% complete
RMAN duplicate
89% complete
Post duplicate database operations
100% complete

Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/CDB1_SB/CDB1_SB.log" for further details.

[oracle@host03 dbs]$ ls sp*
spfileCDB1.ora

[oracle@host03 dbs]$ export ORACLE_SID=CDB1

[oracle@host03 dbs]$ sqlplus sys as sysdba

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Jun 28 23:34:46 2019
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> show pdbs

CON_ID CON_NAME              OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
2 PDB$SEED              READ ONLY  NO
3 PDB1               READ ONLY  NO
4 PDB2               READ ONLY  NO
SQL>

 

If we examine the log file we can see that DBCA is internally running the DUPLICATE command to create the standby database.
 

...
...
duplicate target database
for standby
from active database
dorecover
nofilenamecheck
;
}
[progressPage.flowWorker] [ 2019-07-01 15:12:33.539 AWST ] [RMANEngine.done:1663]  Done called
[progressPage.flowWorker] [ 2019-07-01 15:12:33.539 AWST ] [RMANEngine.spoolOff:1549]  Setting spool off = /u01/app/oracle/cfgtoollogs/dbca/CDB1_SB/rmanUtil
[progressPage.flowWorker] [ 2019-07-01 15:12:33.539 AWST ] [RMANEngine.executeImpl:1294]  m_bExecQuery=false
[progressPage.flowWorker] [ 2019-07-01 15:12:33.539 AWST ] [RMANEngine.executeImpl:1302]  Command being written to rman process=exit;
...
...

 

We can execute a host operating system command directly from the DGMGRL prompt. Here we examine the contents of the DGMGRL script file which we will use to create the Data Guard configuration.
 


DGMGRL> ! cat cre_data_guard_broker.cfg
Executing operating system command(s):" cat cre_data_guard_broker.cfg"
connect sys/G#vin2407@cdb1;
set time on;
set echo on;
-- #############################################;
-- Creating the Data Guard Broker Configuration;
-- #############################################;
create configuration 'cdb1_dg' as primary database is 'cdb1' connect identifier is 'cdb1';
add database 'cdb1_sb' as connect identifier is 'cdb1_sb';
enable configuration;
host sleep 30;
show database 'cdb1';
show database 'cdb1_sb';
-- #############################################;
-- Checking the Data Guard Broker Configuration;
-- #############################################;
show database 'cdb1' 'InconsistentProperties';
show database 'cdb1' 'InconsistentLogXptProps';
validate network configuration for all;
validate database 'cdb1_sb' spfile;

 

Create the Data Guard Broker configuration
 

DGMGRL> @cre_data_guard_broker.cfg
Connected to "cdb1"
Connected as SYSDBA.
-- #############################################;
-- Creating the Data Guard Broker Configuration;
-- #############################################;
Configuration "cdb1_dg" created with primary database "cdb1"
Database "cdb1_sb" added
Enabled.
Executing operating system command(s):" sleep 30;"

Database - cdb1

Role:               PRIMARY
Intended State:     TRANSPORT-ON
Instance(s):
cdb1

Database Status:
SUCCESS

Database - cdb1_sb

Role:               PHYSICAL STANDBY
Intended State:     APPLY-ON
Transport Lag:      0 seconds (computed 1 second ago)
Apply Lag:          0 seconds (computed 1 second ago)
Average Apply Rate: 32.00 KByte/s
Real Time Query:    ON
Instance(s):
cdb1

Database Status:
SUCCESS


-- #############################################;
-- Checking the Data Guard Broker Configuration;
-- #############################################;

INCONSISTENT PROPERTIES
INSTANCE_NAME        PROPERTY_NAME         MEMORY_VALUE         SPFILE_VALUE         BROKER_VALUE

INCONSISTENT LOG TRANSPORT PROPERTIES
INSTANCE_NAME         STANDBY_NAME        PROPERTY_NAME         MEMORY_VALUE         BROKER_VALUE

Connecting to instance "cdb1" on database "cdb1" ...
Connected to "cdb1"
Checking connectivity from instance "cdb1" on database "cdb1 to instance "cdb1" on database "cdb1_sb"...
Succeeded.
Connecting to instance "cdb1" on database "cdb1_sb" ...
Connected to "CDB1_SB"
Checking connectivity from instance "cdb1" on database "cdb1_sb to instance "cdb1" on database "cdb1"...
Succeeded.

Oracle Clusterware is not configured on database "cdb1".
Connecting to database "cdb1" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=host02.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=cdb1_DGMGRL.localdomain)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "cdb1".

Oracle Clusterware is not configured on database "cdb1_sb".
Connecting to database "cdb1_sb" using static connect identifier "(DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=host03.localdomain)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=CDB1_SB_DGMGRL.localdomain)(INSTANCE_NAME=cdb1)(SERVER=DEDICATED)(STATIC_SERVICE=TRUE)))" ...
Succeeded.
The static connect identifier allows for a connection to database "cdb1_sb".

Connecting to "cdb1".
Connected to "cdb1"

Connecting to "cdb1_sb".
Connected to "CDB1_SB"


Parameter settings with different values:

audit_file_dest:
cdb1 (PRIMARY) : /u01/app/oracle/admin/cdb1/adump
cdb1_sb          : /u01/app/oracle/admin/CDB1_SB/adump

 

Export the Data Guard Broker configuration, then remove the configuration, and finally import the broker configuration back in.
 

DGMGRL> EXPORT CONFIGURATION TO 'orcl_dgb.exp';
Succeeded.

DGMGRL> remove configuration;
Removed configuration

DGMGRL> show configuration;
ORA-16596: member not part of the Oracle Data Guard broker configuration

Configuration details cannot be determined by DGMGRL

DGMGRL> IMPORT CONFIGURATION FROM 'orcl_dgb.exp';
Succeeded. Run ENABLE CONFIGURATION to enable the imported configuration.

DGMGRL> enable configuration;
Enabled.

DGMGRL> show configuration;

Configuration - orcl_dg

  Protection Mode: MaxPerformance
  Members:
  orcl_sb - Primary database
    orcl    - Physical standby database 

Fast-Start Failover:  Disabled

Configuration Status:
ENABLED

Oracle 19c New Feature Automatic Indexing


Automatic Indexing is a new feature in Oracle 19c which automatically creates, rebuilds, and drops indexes in a database based on the application workload.

The index management task is now performed dynamically by the database itself, via a background task that executes every 15 minutes.

The automatic indexing task analyzes the current workload and identifies candidates for indexes.

It then creates the indexes as invisible indexes and evaluates them against the identified candidate SQL statements. If performance improves, the indexes are made visible and can then be used by the application. If there is no improvement in performance, the indexes are marked as unusable and dropped after a predefined interval.

The automatic indexing feature is managed via the DBMS_AUTO_INDEX package.
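The current automatic indexing settings can be inspected via the DBA_AUTO_INDEX_CONFIG dictionary view, for example:

```
-- Inspect the current automatic indexing configuration
SELECT parameter_name, parameter_value
FROM   dba_auto_index_config
ORDER  BY parameter_name;
```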

Note that this feature is currently available only on the Oracle Engineered Systems platform.

Let us have a look at an example of this new Oracle 19c feature.
 
Enable automatic indexing for the DEMO schema, but create any new auto indexes only as invisible indexes
 

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','REPORT ONLY');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SCHEMA','DEMO',TRUE);

PL/SQL procedure successfully completed.

 
We run a few queries against a table with 20 million rows – this table currently has no indexes
 

SQL> conn demo/demo
Connected.

SQL> select * from mysales where id=4711;

        ID       FLAG PRODUCT           CHANNEL_ID    CUST_ID AMOUNT_SOLD
---------- ---------- ----------------- ---------- ---------- -----------
ORDER_DAT SHIP_DATE
--------- ---------
      4711       4712 Samsung Galaxy S7          1        711        5000
08-JUL-19 14-JAN-05


SQL> select * from mysales where id=4713;

        ID       FLAG PRODUCT           CHANNEL_ID    CUST_ID AMOUNT_SOLD
---------- ---------- ----------------- ---------- ---------- -----------
ORDER_DAT SHIP_DATE
--------- ---------
      4713       4714 Samsung Galaxy S7          3        713        5000
08-JUL-19 16-JAN-05


SQL> select * from mysales where id=4715;

        ID       FLAG PRODUCT           CHANNEL_ID    CUST_ID AMOUNT_SOLD
---------- ---------- ----------------- ---------- ---------- -----------
ORDER_DAT SHIP_DATE
--------- ---------
      4715       4716 Samsung Galaxy S7          0        715        5000
08-JUL-19 18-JAN-05
..
..

 
Obtain information about the automatic indexing operations via the REPORT_ACTIVITY and REPORT_LAST_ACTIVITY functions of the DBMS_AUTO_INDEX package.

Because automatic indexing has been configured with the REPORT option, the indexes are created as INVISIBLE indexes.
 

SQL> SET LONG 1000000 PAGESIZE 0

SQL> SELECT DBMS_AUTO_INDEX.report_activity() FROM dual;

GENERAL INFORMATION
-------------------------------------------------------------------------------
 Activity start               : 08-JUL-2019 11:05:20
 Activity end                 : 09-JUL-2019 11:05:20
 Executions completed         : 4
 Executions interrupted       : 0
 Executions with fatal error  : 0
-------------------------------------------------------------------------------

SUMMARY (AUTO INDEXES)
-------------------------------------------------------------------------------
 Index candidates                              : 1
 Indexes created (visible / invisible)         : 1 (0 / 1)
 Space used (visible / invisible)              : 394.26 MB (0 B / 394.26 MB)
 Indexes dropped                               : 0
 SQL statements verified                       : 14
 SQL statements improved (improvement factor)  : 14 (167664.6x)
 SQL plan baselines created                    : 0
 Overall improvement factor                    : 167664.6x
-------------------------------------------------------------------------------

SUMMARY (MANUAL INDEXES)
-------------------------------------------------------------------------------
 Unused indexes    : 0
 Space used        : 0 B
 Unusable indexes  : 0
-------------------------------------------------------------------------------

INDEX DETAILS
-------------------------------------------------------------------------------
1. The following indexes were created:
-------------------------------------------------------------------------------
----------------------------------------------------------------------
| Owner | Table   | Index                | Key | Type   | Properties |
----------------------------------------------------------------------
| DEMO  | MYSALES | SYS_AI_bmqt0qthw74kg | ID  | B-TREE | NONE       |
----------------------------------------------------------------------
-------------------------------------------------------------------------------

VERIFICATION DETAILS
-------------------------------------------------------------------------------
1. The performance of the following statements improved:
-------------------------------------------------------------------------------
 Parsing Schema Name  : DEMO

 SQL ID               : 06wuaj97jms49

 SQL Text             : select * from mysales where id=4713

 Improvement Factor   : 167667x


Execution Statistics:
-----------------------------
                    Original Plan                 Auto Index Plan
                    ----------------------------  ----------------------------
 Elapsed Time (s):  379501                        3634
 CPU Time (s):      377495                        854
 Buffer Gets:       167667                        4
 Optimizer Cost:    45698                         4
 Disk Reads:        0                             2
 Direct Writes:     0                             0
 Rows Processed:    1                             1
 Executions:        1                             1


PLANS SECTION
--------------------------------------------------------------------------------
-------------

- Original
-----------------------------
 Plan Hash Value  : 3597614299

--------------------------------------------------------------------------------

| Id | Operation                   | Name    | Rows | Bytes | Cost  | Time     |

--------------------------------------------------------------------------------

|  0 | SELECT STATEMENT            |         |      |       | 45698 |          |

|  1 |   TABLE ACCESS STORAGE FULL | MYSALES |    1 |    56 | 45698 | 00:00:02 |

--------------------------------------------------------------------------------


- With Auto Indexes
-----------------------------
 Plan Hash Value  : 2047064025

--------------------------------------------------------------------------------
-----------------------
| Id  | Operation                             | Name                 | Rows | By
tes | Cost | Time     |
--------------------------------------------------------------------------------
-----------------------
|   0 | SELECT STATEMENT                      |                      |    1 |
 56 |    4 | 00:00:01 |
|   1 |   TABLE ACCESS BY INDEX ROWID BATCHED | MYSALES              |    1 |
 56 |    4 | 00:00:01 |
| * 2 |    INDEX RANGE SCAN                   | SYS_AI_bmqt0qthw74kg |    1 |
    |    3 | 00:00:01 |
--------------------------------------------------------------------------------
-----------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("ID"=4713)


Notes
-----
- Dynamic sampling used for this statement ( level = 11 )

...
...
...
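The REPORT_LAST_ACTIVITY function can be called in the same way to report on just the most recent automatic indexing task execution rather than the default 24-hour window:

```
-- Report on only the most recent automatic indexing task execution
SET LONG 1000000 PAGESIZE 0
SELECT DBMS_AUTO_INDEX.report_last_activity() FROM dual;
```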

 
By reviewing the generated Automatic Indexing report, we can now configure automatic indexing to create any new auto indexes as VISIBLE indexes so that they can be used by SQL statements.

We can allocate a dedicated tablespace to store any automatic indexes which will be created, and we can also stipulate a quota within that tablespace for creating automatic indexes.
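As a sketch of the quota setting mentioned above, the AUTO_INDEX_SPACE_BUDGET parameter of DBMS_AUTO_INDEX.CONFIGURE takes a percentage value (the 50 below is an arbitrary illustrative figure, not taken from this demo):

```
-- Limit automatic indexes to 50% of the assigned tablespace
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_SPACE_BUDGET','50');
```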
 

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_DEFAULT_TABLESPACE','TEST_IND');

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE','IMPLEMENT');

PL/SQL procedure successfully completed.

 
We now execute the same queries once again against the 20 million row MYSALES table.

Now a new index has been automatically created – note the execution plan.

The index has also been created in the assigned tablespace for automatic indexes.
 

SQL> select * from mysales where id=4711;
      4711       4712 Samsung Galaxy S7          1        711        5000 08-JUL-19 14-JAN-05

SQL> select * from table (dbms_xplan.display_cursor);
SQL_ID  fc177w86zpdbb, child number 1
-------------------------------------
select * from mysales where id=4711

Plan hash value: 2047064025

------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                 | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                      |       |       |     4 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID BATCHED| MYSALES              |     1 |    56 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN                  | SYS_AI_bmqt0qthw74kg |     1 |       |     3   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("ID"=4711)



SQL> select owner,tablespace_name from dba_indexes
    where index_name='SYS_AI_bmqt0qthw74kg';

OWNER
--------------------------------------------------------------------------------
TABLESPACE_NAME
------------------------------
DEMO
TEST_DATA

 
Generate the report on automatic indexing (by default, for the past 24-hour period).

Note that the report now shows the index has been created as a visible index.
 

SQL> SELECT DBMS_AUTO_INDEX.report_activity() FROM dual;

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
GENERAL INFORMATION
-------------------------------------------------------------------------------
 Activity start               : 08-JUL-2019 11:41:53
 Activity end                 : 09-JUL-2019 11:41:53
 Executions completed         : 6
 Executions interrupted       : 0
 Executions with fatal error  : 0
-------------------------------------------------------------------------------

SUMMARY (AUTO INDEXES)
-------------------------------------------------------------------------------

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
 Index candidates                              : 1
 Indexes created (visible / invisible)         : 1 (1 / 0)
 Space used (visible / invisible)              : 394.26 MB (394.26 MB / 0 B)
 Indexes dropped                               : 0
 SQL statements verified                       : 14
 SQL statements improved (improvement factor)  : 14 (167664.6x)
 SQL plan baselines created                    : 0
 Overall improvement factor                    : 167664.6x
-------------------------------------------------------------------------------

SUMMARY (MANUAL INDEXES)

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
-------------------------------------------------------------------------------
 Unused indexes    : 0
 Space used        : 0 B
 Unusable indexes  : 0
-------------------------------------------------------------------------------

INDEX DETAILS
-------------------------------------------------------------------------------
1. The following indexes were created:
-------------------------------------------------------------------------------
----------------------------------------------------------------------

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
| Owner | Table   | Index                | Key | Type   | Properties |
----------------------------------------------------------------------
| DEMO  | MYSALES | SYS_AI_bmqt0qthw74kg | ID  | B-TREE | NONE       |
----------------------------------------------------------------------
-------------------------------------------------------------------------------

VERIFICATION DETAILS
-------------------------------------------------------------------------------
1. The performance of the following statements improved:
-------------------------------------------------------------------------------
 Parsing Schema Name  : DEMO

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------

 SQL ID               : 06wuaj97jms49

 SQL Text             : select * from mysales where id=4713

 Improvement Factor   : 167667x


Execution Statistics:
-----------------------------
                    Original Plan                 Auto Index Plan

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
                    ----------------------------  ----------------------------
 Elapsed Time (s):  379501                        3634
 CPU Time (s):      377495                        854
 Buffer Gets:       167667                        4
 Optimizer Cost:    45698                         4
 Disk Reads:        0                             2
 Direct Writes:     0                             0
 Rows Processed:    1                             1
 Executions:        1                             1



DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
PLANS SECTION
--------------------------------------------------------------------------------
-------------

- Original
-----------------------------
 Plan Hash Value  : 3597614299

--------------------------------------------------------------------------------

| Id | Operation                   | Name    | Rows | Bytes | Cost  | Time     |

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------

--------------------------------------------------------------------------------

|  0 | SELECT STATEMENT            |         |      |       | 45698 |          |

|  1 |   TABLE ACCESS STORAGE FULL | MYSALES |    1 |    56 | 45698 | 00:00:02 |

--------------------------------------------------------------------------------


- With Auto Indexes

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
-----------------------------
 Plan Hash Value  : 2047064025

--------------------------------------------------------------------------------
-----------------------
| Id  | Operation                             | Name                 | Rows | By
tes | Cost | Time     |
--------------------------------------------------------------------------------
-----------------------
|   0 | SELECT STATEMENT                      |                      |    1 |
 56 |    4 | 00:00:01 |

DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------
|   1 |   TABLE ACCESS BY INDEX ROWID BATCHED | MYSALES              |    1 |
 56 |    4 | 00:00:01 |
| * 2 |    INDEX RANGE SCAN                   | SYS_AI_bmqt0qthw74kg |    1 |
    |    3 | 00:00:01 |
--------------------------------------------------------------------------------
-----------------------

Predicate Information (identified by operation id):
------------------------------------------
* 2 - access("ID"=4713)


DBMS_AUTO_INDEX.REPORT_ACTIVITY()
--------------------------------------------------------------------------------

Notes
-----
- Dynamic sampling used for this statement ( level = 11 )


Applying the July 2019 Database Release Update Patch


This note describes the steps used to apply the July 2019 Database Release Update patch for Oracle 12c Release 2.

The environment is a single-instance Multitenant database.

Notes

1) Review MOS note Master Note for Database Proactive Patch Program (Doc ID 756671.1)

2) Download patch p29699168_122010_Linux-x86-64.zip – COMBO OF OJVM RU COMPONENT 12.2.0.1.190716 + 12.2.0.1.190716 DBJUL2019RU

3) Make sure the OPatch version is at least 12.2.0.1.17

4) Patch 29699168 comprises two patches:

Patch 29774415 (OJVM RELEASE UPDATE: 12.2.0.1.190716)
Patch 29757449 (DATABASE JUL 2019 RELEASE UPDATE 12.2.0.1.190716)
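For prerequisite 3, a quick generic way to check a version string against the minimum is to compare the two with sort -V; this is a sketch, and the installed value below is a placeholder to be substituted with the output of `$ORACLE_HOME/OPatch/opatch version`:

```shell
# Minimal sketch: verify the installed OPatch version meets the minimum.
# "installed" is a placeholder here; in practice capture it from:
#   $ORACLE_HOME/OPatch/opatch version
min_required="12.2.0.1.17"
installed="12.2.0.1.17"
# sort -V orders version strings numerically; if the minimum sorts first
# (or the two are equal), the installed version is new enough.
lowest=$(printf '%s\n%s\n' "$min_required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$min_required" ]; then
  echo "OPatch version OK"
else
  echo "OPatch is older than $min_required - update OPatch first"
fi
```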

Apply patch 29774415
 

[oracle@host02 OPatch]$ pwd
/u01/app/oracle/patch/29699168/29774415

[oracle@host02 29774415]$ $ORACLE_HOME/OPatch/opatch apply 
Oracle Interim Patch Installer version 12.2.0.1.17
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.17
OUI version       : 12.2.0.1.4
Log file location : /u02/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2019-07-23_20-48-53PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   29774415  

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u02/app/oracle/product/12.2.0/dbhome_1')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '29774415' to OH '/u02/app/oracle/product/12.2.0/dbhome_1'

Patching component oracle.javavm.server, 12.2.0.1.0...

Patching component oracle.javavm.server.core, 12.2.0.1.0...

Patching component oracle.rdbms.dbscripts, 12.2.0.1.0...

Patching component oracle.javavm.client, 12.2.0.1.0...

Patching component oracle.rdbms, 12.2.0.1.0...
Patch 29774415 successfully applied.
Log file location: /u02/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2019-07-23_20-48-53PM_1.log

OPatch succeeded.

 
Apply patch 29757449
 

[oracle@host02 29699168]$ cd 29757449/
[oracle@host02 29757449]$ $ORACLE_HOME/OPatch/opatch apply 
Oracle Interim Patch Installer version 12.2.0.1.17
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.17
OUI version       : 12.2.0.1.4
Log file location : /u02/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2019-07-23_20-50-52PM_1.log

Verifying environment and performing prerequisite checks...

--------------------------------------------------------------------------------
Start OOP by Prereq process.
Launch OOP...

Oracle Interim Patch Installer version 12.2.0.1.17
Copyright (c) 2019, Oracle Corporation.  All rights reserved.


Oracle Home       : /u02/app/oracle/product/12.2.0/dbhome_1
Central Inventory : /u01/app/oraInventory
   from           : /u02/app/oracle/product/12.2.0/dbhome_1/oraInst.loc
OPatch version    : 12.2.0.1.17
OUI version       : 12.2.0.1.4
Log file location : /u02/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2019-07-23_20-51-08PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   29757449  

Do you want to proceed? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u02/app/oracle/product/12.2.0/dbhome_1')


Is the local system ready for patching? [y|n]
Y (auto-answered by -silent)
User Responded with: Y
Backing up files...
Applying interim patch '29757449' to OH '/u02/app/oracle/product/12.2.0/dbhome_1'
ApplySession: Optional component(s) [ oracle.has.crs, 12.2.0.1.0 ] , [ oracle.ons.daemon, 12.2.0.1.0 ] , [ oracle.oid.client, 12.2.0.1.0 ] , [ oracle.network.cman, 12.2.0.1.0 ] , [ oracle.jdk, 1.8.0.211.12 ]  not present in the Oracle Home or a higher version is found.

Patching component oracle.rdbms.rsf, 12.2.0.1.0...

Patching component oracle.rdbms, 12.2.0.1.0...

Patching component oracle.network.rsf, 12.2.0.1.0...

Patching component oracle.rdbms.util, 12.2.0.1.0...

Patching component oracle.ctx, 12.2.0.1.0...

Patching component oracle.ctx.rsf, 12.2.0.1.0...

Patching component oracle.ldap.rsf, 12.2.0.1.0...

Patching component oracle.rdbms.dbscripts, 12.2.0.1.0...

Patching component oracle.rdbms.rsf.ic, 12.2.0.1.0...

Patching component oracle.sdo, 12.2.0.1.0...

Patching component oracle.sdo.locator, 12.2.0.1.0...

Patching component oracle.tfa, 12.2.0.1.0...

Patching component oracle.xdk.rsf, 12.2.0.1.0...

Patching component oracle.rdbms.rman, 12.2.0.1.0...

Patching component oracle.assistants.deconfig, 12.2.0.1.0...

Patching component oracle.xdk, 12.2.0.1.0...

Patching component oracle.precomp.rsf, 12.2.0.1.0...

Patching component oracle.sqlplus, 12.2.0.1.0...

Patching component oracle.assistants.acf, 12.2.0.1.0...

Patching component oracle.ldap.rsf.ic, 12.2.0.1.0...

Patching component oracle.rdbms.oci, 12.2.0.1.0...

Patching component oracle.rdbms.dv, 12.2.0.1.0...

Patching component oracle.rdbms.crs, 12.2.0.1.0...

Patching component oracle.rdbms.install.plugins, 12.2.0.1.0...

Patching component oracle.oraolap, 12.2.0.1.0...

Patching component oracle.install.deinstalltool, 12.2.0.1.0...

Patching component oracle.oracore.rsf, 12.2.0.1.0...

Patching component oracle.xdk.parser.java, 12.2.0.1.0...

Patching component oracle.sqlplus.ic, 12.2.0.1.0...

Patching component oracle.ldap.client, 12.2.0.1.0...

Patching component oracle.nlsrtl.rsf, 12.2.0.1.0...

Patching component oracle.rdbms.lbac, 12.2.0.1.0...

Patching component oracle.assistants.server, 12.2.0.1.0...

Patching component oracle.ons, 12.2.0.1.0...

Patching component oracle.rdbms.deconfig, 12.2.0.1.0...

Patching component oracle.precomp.common, 12.2.0.1.0...

Patching component oracle.precomp.lang, 12.2.0.1.0...

Patching component oracle.ctx.atg, 12.2.0.1.0...

OPatch found the word "error" in the stderr of the make command.
Please look at this stderr. You can re-run this make command.
Stderr output:
chmod: changing permissions of ‘/u02/app/oracle/product/12.2.0/dbhome_1/bin/extjobO’: Operation not permitted
make: [iextjob] Error 1 (ignored)


Patch 29757449 successfully applied.
OPatch Session completed with warnings.
Log file location: /u02/app/oracle/product/12.2.0/dbhome_1/cfgtoollogs/opatch/opatch2019-07-23_20-51-08PM_1.log

OPatch completed with warnings.

 
Verify patch application
 

[oracle@host02 29757449]$ $ORACLE_HOME/OPatch/opatch lspatches
29757449;Database Jul 2019 Release Update : 12.2.0.1.190716 (29757449)
29774415;OJVM RELEASE UPDATE: 12.2.0.1.190716 (29774415)

OPatch succeeded.

 
Applying datapatch to the CDB and PDBs
 

[oracle@host02 dbhome_1]$ cd OPatch
[oracle@host02 OPatch]$ ./datapatch -verbose
SQL Patching tool version 12.2.0.1.0 Production on Tue Jul 23 20:58:14 2019
Copyright (c) 2012, 2019, Oracle.  All rights reserved.

Log file for this invocation: /u02/app/oracle/cfgtoollogs/sqlpatch/sqlpatch_24799_2019_07_23_20_58_14/sqlpatch_invocation.log

Connecting to database...OK
Note:  Datapatch will only apply or rollback SQL fixes for PDBs
       that are in an open state, no patches will be applied to closed PDBs.
       Please refer to Note: Datapatch: Database 12c Post Patch SQL Automation
       (Doc ID 1585822.1)
Bootstrapping registry and package to current versions...done
Determining current state...done

Current state of SQL patches:
Patch 29774415 (OJVM RELEASE UPDATE: 12.2.0.1.190716 (29774415)):
  Installed in the binary registry only
Bundle series DBRU:
  ID 190716 in the binary registry and not installed in any PDB

Adding patches to installation queue and performing prereq checks...
Installation queue:
  For the following PDBs: CDB$ROOT PDB$SEED PDB1
    Nothing to roll back
    The following patches will be applied:
      29774415 (OJVM RELEASE UPDATE: 12.2.0.1.190716 (29774415))
      29757449 (DATABASE JUL 2019 RELEASE UPDATE 12.2.0.1.190716)

Installing patches...
Patch installation complete.  Total patches installed: 6

Validating logfiles...
Patch 29774415 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/29774415/22954229/29774415_apply_CDB1_CDBROOT_2019Jul23_20_58_47.log (no errors)
Patch 29757449 apply (pdb CDB$ROOT): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/29757449/23009673/29757449_apply_CDB1_CDBROOT_2019Jul23_20_59_42.log (no errors)
Patch 29774415 apply (pdb PDB$SEED): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/29774415/22954229/29774415_apply_CDB1_PDBSEED_2019Jul23_21_04_13.log (no errors)
Patch 29757449 apply (pdb PDB$SEED): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/29757449/23009673/29757449_apply_CDB1_PDBSEED_2019Jul23_21_04_43.log (no errors)
Patch 29774415 apply (pdb PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/29774415/22954229/29774415_apply_CDB1_PDB1_2019Jul23_21_04_13.log (no errors)
Patch 29757449 apply (pdb PDB1): SUCCESS
  logfile: /u02/app/oracle/cfgtoollogs/sqlpatch/29757449/23009673/29757449_apply_CDB1_PDB1_2019Jul23_21_04_31.log (no errors)
SQL Patching tool complete on Tue Jul 23 21:08:07 2019
[oracle@host02 OPatch]$ 


SQL> select  patch_id,version,status,description  
  from dba_registry_sqlpatch;

  PATCH_ID VERSION		STATUS
---------- -------------------- -------------------------
DESCRIPTION
--------------------------------------------------------------------------------
  29774415 12.2.0.1		SUCCESS
OJVM RELEASE UPDATE: 12.2.0.1.190716 (29774415)

  29757449 12.2.0.1		SUCCESS
DATABASE JUL 2019 RELEASE UPDATE 12.2.0.1.190716

