Wednesday, September 4, 2019

ORA-03297: file contains used data beyond requested RESIZE value

At some stage a DBA will encounter the following error when resizing a datafile:

ORA-03297: file contains used data beyond requested RESIZE value

What may be more frustrating is that the datafile has plenty of free space.

So what can be done to solve this error and resize down the datafile? Why does it occur? Some might be tempted to just add more space to the filesystem and be done with it. Not everyone has that option.

A datafile cannot be resized below its High Water Mark (HWM), and that is what causes the ORA-03297 error. Over the lifetime of the datafile, extents have been allocated high up in the file, pushing the HWM up, while other extents below the HWM have since been freed, leaving free space that a simple RESIZE cannot reclaim.

I will show an example of a datafile hitting this error, what is happening inside the datafile, and how to resolve it. First let's create a tablespace and two tables: one with many rows and a second with only a few.

CREATE TABLESPACE "TESTTBLSPC" DATAFILE '/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf' SIZE 32M
AUTOEXTEND ON NEXT 1310720 MAXSIZE 64M
LOGGING ONLINE PERMANENT BLOCKSIZE 8192
EXTENT MANAGEMENT LOCAL AUTOALLOCATE DEFAULT NOCOMPRESS SEGMENT SPACE MANAGEMENT AUTO;

Tablespace created.

SQL> create table myuser.testtab1 tablespace TESTTBLSPC as
(select level id,dbms_random.string('a',dbms_random.value(1,4000)) str
from dual
connect by level < 20000
)
/
Table created.

SQL> create table myuser.testtab2 tablespace TESTTBLSPC as
(select level id,dbms_random.string('a',dbms_random.value(1,4000)) str
from dual
connect by level < 10
)
/

Table created.

How many MB are now in use by these tables?
select segment_name, bytes/1024/1024
from dba_segments
where segment_name in ('TESTTAB1','TESTTAB2')
and owner = 'MYUSER'
/

SEGMENT_NAME                   BYTES/1024/1024
------------------------------ ---------------
TESTTAB1                                    54
TESTTAB2                                 .0625


To show how these two tables fit in the TESTTBLSPC tablespace, I first want to explain a few values reported for the datafile.

DFSIZEMB = the datafile size in MB on disk.
HWMMB = the location of the HWM in the datafile.
DFFREEMB = the number of FREE MB in the datafile.
%FREE = the percentage of FREE space in the datafile.
RESIZEABLE = the amount of MB that can be gained from resizing.

Use the below SQL to see how much space can be gained from a RESIZE and the location of the HWM in a datafile. For a database with many datafiles this query may take a while.

QUERY 1 - datafile free space, HWM location, resizeable MB

column file_name format a50
column tablespace_name format a20
select
            tablespace_name,
            file_id,
            file_name,
            dfsizeMB,
            hwmMB,
            dffreeMB,
            trunc((dffreeMB/dfsizeMB)*100,2) "% Free",
            trunc(dfsizeMB-hwmMB,2) "Resizable"
   from
   (
      select
           df.tablespace_name tablespace_name,
           df.file_id file_id,
           df.file_name file_name,
           df.bytes/1024/1024 dfsizeMB,
           trunc((ex.hwm*(ts.block_size))/1024/1024,2) hwmMB,
           dffreeMB
      from
           dba_data_files df,
           dba_tablespaces ts,
      (
           select file_id, sum(bytes/1024/1024) dffreeMB
           from dba_free_space
           group by file_id
      ) free,
      (
           select file_id, max(block_id+blocks) hwm
           from dba_extents
           group by file_id
      ) ex
      where df.file_id = ex.file_id
      and df.tablespace_name = ts.tablespace_name
      and df.file_id = free.file_id (+)
      order by df.tablespace_name, df.file_id
    )
/
TABLESPACE_NAME    FILE_ID File Name                                  DFSIZEMB  HWMMB  DFFREEMB  % Free  Resizable
---------------- --------- ------------------------------------------ -------- ------ --------- ------- ----------
TESTTBLSPC              14 /oradata1/TESTDB/TESTTBLSPCdatafile01.dbf     55.75  55.06     .6875    1.23        .69
...
Above, the datafile can only be resized by 0.69MB and the HWM is at the 55MB mark with the datafile size on disk being 55.75MB.

What do these two tables look like within the datafile? NOTE: This query may take a while for a large datafile with many extents.

QUERY 2 - location of segments within a datafile

column segment heading 'Segment Name' format a14
column file_name heading 'File Name' format a40
column segment_type heading 'Segment Type' format a10
select
                file_name,
                segment_type,
                owner||'.'||segment_name segment,
                block_id,
                blockIdMB
        from
        (
         select
              ex.owner owner,
              ex.segment_name segment_name,
              ex.segment_type segment_type,
              ex.block_id block_id,
              df.file_name file_name,
              trunc((ex.block_id*(ts.block_size))/1024/1024,2) blockIdMB
      from
              dba_extents ex, dba_data_files df, dba_tablespaces ts
              where df.file_id = &file_id
              and df.file_id = ex.file_id
              and df.tablespace_name = ts.tablespace_name
              order by ex.block_id desc
      )
      where rownum <= 100
/

Enter value for file_id: 14

File Name                                   Segment Ty Segment Name      BLOCK_ID  BLOCKIDMB
------------------------------------------- ---------- --------------- ---------- ----------
/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf   TABLE      MYUSER.TESTTAB2       7040         55
/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf   TABLE      MYUSER.TESTTAB1       6912         54


We can see table testtab1 ends around the 54MB block in the datafile and testtab2 starts around the 55MB block.

So we now have two tables in the TESTTBLSPC tablespace, created one after the other. Next, let's create a situation where the datafile has a lot of free space yet still cannot be resized, producing "ORA-03297: file contains used data beyond requested RESIZE value". DELETING all the rows from testtab1 would not free any space, because the extents would remain allocated to the table, so instead let's TRUNCATE testtab1 and run the same SQL above again to see the effect on the datafile.
SQL> truncate table myuser.testtab1;

Table truncated.
TABLESPACE_NAME    FILE_ID File Name                                  DFSIZEMB  HWMMB  DFFREEMB  % Free  Resizable
---------------- --------- ------------------------------------------ -------- ------ --------- ------- ----------
TESTTBLSPC              14 /oradata1/TESTDB/TESTTBLSPCdatafile01.dbf     55.75  55.06    54.625   97.98        .69
The datafile now has almost 98% free space (54MB), yet the HWM is still at the 55MB mark. Inspecting the datafile again, testtab1 is now at the 1MB mark and testtab2 is of course still at 55MB.
File Name                                   Segment Ty Segment Name      BLOCK_ID  BLOCKIDMB
------------------------------------------- ---------- --------------- ---------- ----------
/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf   TABLE      MYUSER.TESTTAB2       7040         55
/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf   TABLE      MYUSER.TESTTAB1        128          1

With 54MB of free space in a 55.75MB datafile, let's try to resize it down to 50M.

SQL> alter database datafile '/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf' resize 50M;
alter database datafile '/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf' resize 50M
*
ERROR at line 1:
ORA-03297: file contains used data beyond requested RESIZE value


The only way to resize the datafile down is to lower the HWM. This is achieved by moving the segments whose extents sit at the HWM, either to another tablespace or back into the same tablespace. If moving within the same tablespace, ensure there is plenty of free space below the HWM, and re-run QUERY 2 after each MOVE to see whether anything has been achieved.

Of course, in a production environment where many users are accessing the table, index, LOB and so on, it might not be that easy, and this sort of activity needs to be carefully planned, especially with LOB segments. Also, when a table is moved its indexes are marked UNUSABLE and need to be rebuilt.
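After a MOVE, the indexes left UNUSABLE can be found and rebuilt; a minimal sketch (the index name in the rebuild command is hypothetical, as no index exists in this demo):

select owner, index_name, status
from dba_indexes
where status = 'UNUSABLE'
and owner = 'MYUSER'
/

SQL> alter index myuser.testtab2_idx1 rebuild;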

SQL> alter table myuser.testtab2 move tablespace TESTTBLSPC;

Table altered.
Now the datafile shows that testtab2 is at the 1.06MB mark, right after testtab1.

File Name                                   Segment Ty Segment Name      BLOCK_ID  BLOCKIDMB
------------------------------------------- ---------- --------------- ---------- ----------
/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf   TABLE      MYUSER.TESTTAB2        136       1.06
/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf   TABLE      MYUSER.TESTTAB1        128          1

What about the HWM?
TABLESPACE_NAME    FILE_ID File Name                                  DFSIZEMB  HWMMB  DFFREEMB  % Free  Resizable
---------------- --------- ------------------------------------------ -------- ------ --------- ------- ----------
TESTTBLSPC              14 /oradata1/TESTDB/TESTTBLSPCdatafile01.dbf     55.75   1.12    54.625   97.98      54.63

Now the HWM is at the 1.12 MB mark and there is 54MB of reclaimable space in the datafile.

SQL> alter database datafile '/oradata1/TESTDB/TESTTBLSPCdatafile01.dbf' resize 2M;

Database altered.

Datafile now resized! In a production environment there may be extents from many different objects near the HWM, so a single table MOVE is rarely enough. It is best to check which objects sit around the HWM and calculate how much space a MOVE would actually reclaim. Unless there is a clear benefit in reclaiming a large amount of space, or moving the tables, indexes, LOBs etc. has to be done anyway, the simpler option, where possible, is to add more space to the filesystem.
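If you want to check, before attempting a resize, which segments have extents above a proposed target size, a query along these lines can be used (a sketch; &file_id and &target_mb are substitution variables assumed here):

select distinct ex.owner, ex.segment_name, ex.segment_type
from dba_extents ex, dba_data_files df, dba_tablespaces ts
where df.file_id = &file_id
and ex.file_id = df.file_id
and df.tablespace_name = ts.tablespace_name
and ((ex.block_id + ex.blocks) * ts.block_size)/1024/1024 > &target_mb
/

Any segment returned would need to be moved (or dropped) before a RESIZE down to &target_mb MB could succeed.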

Friday, August 30, 2019

Restore RMAN backup to different server with different database name


Here we will restore an Oracle database backup to another server, effectively cloning the database from an RMAN hot backup.



We have taken an RMAN backup of PROD and will restore it to a different server under a different database name.
Let's take two servers as below:
SERVER1 - DBPROD
SERVER2 - DBTEST

ON SERVER1:



DB Name: DBPROD

–Take a backup of DBPROD including archivelogs, controlfile and spfile.
–Copy all backupsets from SERVER1 (location: /data1/bak/TEST/) to SERVER2 (location: /u01/backup/TEST/) so that we can restore the database from them on SERVER2.

ON SERVER2:

–Create a pfile for database DBTEST.
DB Name: DBTEST

#export ORACLE_SID=TEST
#rman target /
RMAN>startup nomount;
RMAN>restore controlfile from '/u01/backup/backup_db_c-300166359-20120316-00';   --- complete path of backup
RMAN>alter database mount;

–Since the backup location on SERVER2 is different from SERVER1, we have to use the CATALOG command to register the new backup location in the controlfile.
Use the below command for all backuppieces:


RMAN> CATALOG START WITH '/my/backup/location';
Or
RMAN>catalog backuppiece '/u01/backup/ora_df_020412_21_14_NSNDB_1635_1';
RMAN>catalog backuppiece '/u01/backup/ora_df_020412_21_14_NSNDB_1636_1';
RMAN>catalog backuppiece '/u01/backup/ora_arch_230412_13_43_NSNDB_23_1';
RMAN>catalog backuppiece '/u01/backup/ora_arch_230412_13_43_NSNDB_24_1';

–Since the datafile locations on SERVER2 are different from SERVER1, we also have to update the controlfile for them with the SET NEWNAME clause:



Now restore database backup on SERVER2:

RMAN>
run
{
set newname for datafile '/data02/oradata/system01.dbf' to '/u01/oradata/testbak/system01.dbf';
set newname for datafile '/data02/oradata/undotbs01.dbf' to '/u01/oradata/testbak/undotbs1.dbf';
set newname for datafile '/data02/oradata/sysaux01.dbf' to '/u01/oradata/testbak/sysaux01.dbf';
set newname for datafile '/data02/oradata/users01.dbf' to '/u01/oradata/testbak/users01.dbf';
restore database;
switch datafile all;
}
RMAN>exit;
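If there are many datafiles, listing each one with SET NEWNAME becomes tedious. If the version supports it (11gR2 onward), a single SET NEWNAME FOR DATABASE with the %b substitution (the filename without its directory) can be used instead; a sketch assuming all datafiles should land under /u01/oradata/testbak:

RMAN>
run
{
set newname for database to '/u01/oradata/testbak/%b';
restore database;
switch datafile all;
}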

–Also update redo log file in controlfile on SERVER2:

#sqlplus "/as sysdba"
SQL>alter database rename file '/data02/oradata/redo07.log' to '/u01/oradata/testbak/redo07.log';
SQL>alter database rename file '/data02/oradata/redo08.log' to '/u01/oradata/testbak/redo08.log';
SQL>alter database rename file '/data02/oradata/redo09.log' to '/u01/oradata/testbak/redo09.log';
SQL>exit;
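The redo log members that need renaming can be confirmed beforehand by querying v$logfile while the database is mounted:

SQL>select group#, member from v$logfile order by group#;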

#rman target /

RMAN>
run
{
recover database;
alter database open resetlogs;
}
RMAN>exit;
#sqlplus "/as sysdba"

SQL>alter database tempfile '/u01/home/oracle/PROD/tempfiles/temp01_PROD.dbf' drop including datafiles;

SQL>ALTER TABLESPACE TEMP ADD TEMPFILE '/u01/oradata/nsnbak/temp01.dbf' SIZE 500M;
(use SIZE 200M REUSE instead if re-adding an existing tempfile)

SQL>select name from v$tempfile;


To change the DB name as well, follow the steps below.

To change the database name in addition to the DBID, specify the DBNAME parameter of the DBNEWID (nid) utility. This example changes the name to test_db.
Ensure that the target database is mounted but not open, and that it was shut down consistently prior to mounting. For example:

SHUTDOWN IMMEDIATE
STARTUP MOUNT

$ nid TARGET=SYS/password DBNAME=test_db

After nid completes, set the DB_NAME initialization parameter in the parameter file to the new name (and recreate the password file if one is used), then mount the database again and open it with RESETLOGS:

SQL> ALTER DATABASE OPEN RESETLOGS;

Thursday, August 1, 2019

RMAN backup status check

These scripts should be run in the target database (not in the catalog).

Log in as sysdba and issue the following scripts.

The scripts below report on RMAN backups (full, incremental and archivelog), both currently running and historical:

Script-1) The script below gives the RMAN backup status along with start and end times.

SELECT
    session_key,
    input_type,
    status,
    TO_CHAR(
        start_time,
        'mm/dd/yy hh24:mi'
    ) start_time,
    TO_CHAR(
        end_time,
        'mm/dd/yy hh24:mi'
    ) end_time,
    elapsed_seconds / 3600 hrs
FROM
    v$rman_backup_job_details
ORDER BY start_time DESC;


Script-2) The script below gives the SID, total work, work done so far and percentage of completion for currently running backups.

SELECT
    sid,
    serial#,
    context,
    sofar,
    totalwork,
    round(
        sofar / totalwork * 100,
        2
    ) "% COMPLETE"
FROM
    v$session_longops
WHERE
        opname LIKE 'RMAN%'
    AND
        opname NOT LIKE '%aggregate%'
    AND
        totalwork != 0
    AND
        sofar <> totalwork;
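If you also want an estimate of how long a running backup still has to go, v$session_longops exposes a TIME_REMAINING column (in seconds); a variation of the script above:

SELECT
    sid,
    serial#,
    opname,
    round(sofar / totalwork * 100, 2) "% COMPLETE",
    round(time_remaining / 60, 1) minutes_remaining
FROM
    v$session_longops
WHERE
        opname LIKE 'RMAN%'
    AND
        opname NOT LIKE '%aggregate%'
    AND
        totalwork != 0
    AND
        sofar <> totalwork;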


Script-3) The script below is used for historical backup status:

SELECT
    session_key,
    input_type,
    status,
    TO_CHAR(
        start_time,
        'mm-dd-yyyy hh24:mi:ss'
    ) AS rman_bkup_start_time,
    TO_CHAR(
        end_time,
        'mm-dd-yyyy hh24:mi:ss'
    ) AS rman_bkup_end_time,
    elapsed_seconds / 3600 hours
FROM
    v$rman_backup_job_details
ORDER BY start_time desc;

Tuesday, July 23, 2019

ORACLE RMAN Backup types

RMAN - Sample Backup Scripts 10g


• Making Whole Database Backups with RMAN
• Backing up Individual Tablespaces with RMAN
• Backing up Individual Datafiles and Datafile Copies with RMAN
• Backing up Control Files with RMAN
• Backing up Server Parameter Files with RMAN
• Backing up Archived Redo Logs with RMAN
• Backing up the Whole Database including Archivelogs
=====================================================================================

Making Whole Database Backups with RMAN


You can perform whole database backups with the database mounted or open. To perform a whole database backup from the RMAN prompt the BACKUP DATABASE command can be used. The simplest form of the command requires no parameters, as shown in this example:

RMAN> backup database;
In the example above no backup location was specified, meaning the backup will automatically be placed in the Flash Recovery Area (FRA). If the FRA has not been set up then all backups default to $ORACLE_HOME/dbs.

How to check if the FRA has been set up:

SQL> show parameter recovery_file_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest                string      /recovery_area
db_recovery_file_dest_size           big integer 50G

If your FRA is not set up (i.e. the values are null) please refer to the following note for assistance in setting it up.

Note 305648.1 What is a Flash Recovery Area and how to configure it ?

If you wish to place your backup outside the FRA then the following RMAN syntax may be used.

RMAN> backup database format '/backups/PROD/df_t%t_s%s_p%p';

• Backing Up Individual Tablespaces with RMAN

RMAN allows individual tablespaces to be backed up with the database open or mounted.

RMAN> backup tablespace SYSTEM, UNDOTBS, USERS;

• Backing Up Individual Datafiles and Datafile Copies with RMAN

The flexibility of backing up a single datafile is also available. As seen below, you can reference the datafile by file# or by file name, and multiple datafiles can be backed up at a time.

RMAN> backup datafile 2;

RMAN> backup datafile 2 format '/backups/PROD/df_t%t_s%s_p%p';

RMAN> backup datafile 1,2,3,6,7,8;

RMAN> backup datafile '/oradata/system01.dbf';

• Backing Up the current controlfile & Spfile 

The controlfile and spfile are backed up in similar ways. Whenever a full database backup is performed, the controlfile and spfile are backed up; in fact, whenever datafile 1 is backed up these two files are backed up as well.

It is also good practice to back up the controlfile, especially after tablespaces or datafiles have been added or dropped.

If you are not using an RMAN catalog it is even more important that you back up your controlfile frequently. You can also configure another method of controlfile backup, referred to as 'autobackup of controlfile'.

Refer to the manual for more information regarding this feature.
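For reference, enabling controlfile autobackup is a one-line configuration change; the FORMAT clause in the second command is only an example location:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE DISK TO '/backups/PROD/%F';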

RMAN> backup current controlfile;

RMAN> backup current controlfile format '/backups/PROD/df_t%t_s%s_p%p';

RMAN> backup spfile;

• Backing Up Archivelogs

It is important that archivelogs are backed up in a timely manner and removed correctly to ensure the file system does not fill up. Below are a few different examples. Option one backs up all archivelogs to the FRA or default location. Option two backs up all archivelogs generated between 30 and 7 days ago. Option three backs up archivelogs from log sequence XXX until log sequence YYY to a specified location and then deletes them.

RMAN> backup archivelog all;

RMAN> backup archivelog from time 'sysdate-30' until time 'sysdate-7';

RMAN> backup archivelog from logseq=XXX until logseq=YYY delete input format '/backups/PROD/%d_archive_%T_%u_s%s_p%p';

• Backing up the Whole database including archivelogs

Below is an example of how the whole database can be backed up while also backing up the archivelogs and purging them after a successful backup. The first example backs up to the FRA; if you wish to redirect the output, the second command shows how this is achieved.

RMAN> backup database plus archivelog delete input;

RMAN> backup database plus archivelog delete input format '/backups/PROD/df_t%t_s%s_p%p';

Monday, July 22, 2019

Processes & Sessions

How to increase PROCESSES initialization parameter


ORA-00020 maximum number of processes exceeded
Cause: All process state objects are in use.
Action: Increase the value of the PROCESSES initialization parameter.

ORA-00020 comes under "Oracle Database Server Messages". These messages are generated by the Oracle database server when running any Oracle program.

How to increase PROCESSES initialization parameter:

1.    Login as sysdba
    sqlplus / as sysdba
  
2. Check Current Setting of Parameters
    sql> show parameter sessions
    sql> show parameter processes
    sql> show parameter transactions

3.    If you are planning to increase the "processes" parameter you should also plan to increase the "sessions" and "transactions" parameters.
    A basic formula for determining these parameter values is as follows:
  
        processes=x
        sessions=x*1.1+5
        transactions=sessions*1.1
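    For example, with processes=500 the formula gives sessions = 500*1.1 + 5 = 555 and transactions = 555*1.1 ≈ 610, which are the values used in the commands below.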
      
4.    These parameters are static and can't be modified in memory. You have to modify the spfile (scope=spfile) and bounce the instance.
    sql> alter system set processes=500 scope=spfile;
    sql> alter system set sessions=555 scope=spfile;
    sql> alter system set transactions=610 scope=spfile;
    sql> shutdown abort
    sql> startup

Friday, February 22, 2019

Uninstall the oracle software

STEP-1) Before uninstalling the Oracle software, make sure you have deleted the Oracle databases and stopped any services running from the ORACLE_HOME.

STEP-2) The methods below describe how to uninstall the Oracle software:

1) Using the uninstall option of runInstaller
2) Using the deinstall tool under the Oracle home location
3) Manual uninstall

Method 1:
Remember that when you install the Oracle software you use runInstaller from the installation media. The same runInstaller can be used to uninstall the Oracle software as well.

Go to the runInstaller location and run the statement below:
./runInstaller -deinstall -home /u01/app/oracle/product/12.1.0/dbhome_1/

Method 2:
Under the ORACLE_HOME location there is a deinstall utility that you can use to remove the Oracle software:
cd $ORACLE_HOME/deinstall
./deinstall

Method 3:
Sometimes the Oracle software installation gets corrupted, and in such cases the deinstallation utilities above will not help you remove the Oracle software. Use the method below to remove the Oracle home using Linux commands.

Stop all the oracle databases as well as processes running from the ORACLE_HOME.

Delete ORACLE_HOME
*** Please be cautious when using the rm -Rf command.
cd $ORACLE_HOME
rm -Rf *
Delete ORACLE_BASE

cd $ORACLE_BASE
rm -Rf *
Remove oratab file
rm /etc/oratab

Oracle 12cR2 silent installation

Step-1: Download the Oracle 12cR2 software, copy it to the Linux server and unzip the files.
Step-2: Create a response file under /tmp with the details below:
vi /tmp/12cR2_response_silentinstall.rsp

Note: Please replace the ORACLE_HOSTNAME, ORACLE_HOME and ORACLE_BASE values in the file below.

oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.2.0
oracle.install.option=INSTALL_DB_SWONLY
ORACLE_HOSTNAME=ip-172-3-16-9
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=dba
oracle.install.db.OSBACKUPDBA_GROUP=dba
oracle.install.db.OSDGDBA_GROUP=dba
oracle.install.db.OSKMDBA_GROUP=dba
oracle.install.db.OSRACDBA_GROUP=dba
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES

Step-3: Go to the location where the 12cR2 installation software was copied:
cd <12c software copied location>/database

Step-4: Run the script below for silent installation.
./runInstaller -ignoreSysPrereqs -showProgress -silent -responseFile /tmp/12cR2_response_silentinstall.rsp

How to find long running transactions in an Oracle database?

SELECT t.start_time,
  s.sid,
  s.serial#,
  s.username,
  s.status,
  s.schemaname,
  s.osuser,
  s.process,
  s.machine,
  s.terminal,
  s.program,
  s.module,
  s.type,
  TO_CHAR(s.logon_time,'DD/MON/YY HH24:MI:SS') logon_time
FROM v$transaction t,
  v$session s
WHERE s.saddr = t.ses_addr
AND s.status = 'ACTIVE'
AND s.username IS NOT NULL
ORDER BY start_time desc;
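To focus on transactions that have been open for a while, the START_TIME column of v$transaction (a VARCHAR2 in 'MM/DD/YY HH24:MI:SS' format) can be converted and filtered; a sketch listing transactions open for more than 60 minutes (the threshold is an example):

SELECT s.sid,
  s.serial#,
  s.username,
  t.start_time,
  ROUND((SYSDATE - TO_DATE(t.start_time,'MM/DD/YY HH24:MI:SS')) * 24 * 60) minutes_open
FROM v$transaction t,
  v$session s
WHERE s.saddr = t.ses_addr
AND (SYSDATE - TO_DATE(t.start_time,'MM/DD/YY HH24:MI:SS')) * 24 * 60 > 60
ORDER BY t.start_time;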

Thursday, February 21, 2019

DATA GUARD Monitoring Scripts


Starting Redo Apply on standby database

Without real time apply (archived logs being applied) on the standby database:

alter database recover managed standby database disconnect from session;

With real time apply (redo logs being applied):

alter database recover managed standby database using current logfile disconnect;

Stopping Redo Apply on standby database:

alter database recover managed standby database cancel;

Monitoring Redo Apply on Physical Standby Database:
SELECT arch.thread# "Thread",
  arch.sequence# "LastSequenceReceived",
  appl.sequence# "LastSequenceApplied",
  (arch.sequence# - appl.sequence#) "Difference"
FROM
  (SELECT thread#,
    sequence#
  FROM v$archived_log
  WHERE (thread#, first_time) IN
    (SELECT thread#, MAX(first_time) FROM v$archived_log GROUP BY thread#
    )
  ) arch,
  (SELECT thread#,
    sequence#
  FROM v$log_history
  WHERE (thread#, first_time) IN
    (SELECT thread#, MAX(first_time) FROM v$log_history GROUP BY thread#
    )
  ) appl
WHERE arch.thread# = appl.thread#;


Standby database process status
select distinct process, status, thread#, sequence#, block#, blocks from v$managed_standby ;

If using real time apply
select TYPE, ITEM, to_char(TIMESTAMP, 'DD-MON-YYYY HH24:MI:SS') from v$recovery_progress where ITEM='Last Applied Redo';

or
select recovery_mode from v$archive_dest_status where dest_id=1;
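The apply and transport lag can also be read directly from v$dataguard_stats on the standby (column availability varies slightly by version):

select name, value, time_computed
from v$dataguard_stats
where name in ('apply lag','transport lag');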

Troubleshooting Log transport services
1) Verify that the primary database is in archivelog mode and has automatic archiving enabled:

select log_mode from v$database;

2) Verify that sufficient space exists in the local archive destination as well as all destinations marked as mandatory. The following query can be used to determine all local and mandatory destinations that need to be checked:

select dest_id,destination from v$archive_dest
where schedule='ACTIVE'
and (binding='MANDATORY' or target='PRIMARY');

3) Determine if the last log switch to any remote destinations resulted in an
error. Immediately following a log switch run the following query:

select dest_id,status,error from v$archive_dest
where target='STANDBY';

Address any errors that are returned in the error column. Perform a log
switch and re-query to determine if the issue has been resolved.

4) Determine if any error conditions have been reached by querying the
v$dataguard_status view (view only available in 9.2.0 and above):

select message, to_char(timestamp,'HH:MI:SS') timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

5) Gather information about how the remote destinations are performing the
archival:

select dest_id,archiver,transmit_mode,affirm,net_timeout,delay_mins,async_blocks
from v$archive_dest where target='STANDBY';

6) Run the following query to determine the current sequence number, the last
sequence archived, and the last sequence applied to a standby:

select ads.dest_id,
max(sequence#) "Current Sequence",
max(log_sequence) "Last Archived",
max(applied_seq#) "Last Sequence Applied"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
where ad.dest_id=al.dest_id
and al.dest_id=ads.dest_id
group by ads.dest_id;

If you are remotely archiving using the LGWR process then the archived
sequence should be one higher than the current sequence. If remotely
archiving using the ARCH process then the archived sequence should be equal
to the current sequence. The applied sequence information is updated at
log switch time.



Troubleshooting Redo Apply services
1. Verify the last sequence# received and the last sequence# applied on the standby database by running the following query:

select max(al.sequence#) "Last Seq Received",
max(lh.sequence#) "Last Seq Applied"
from v$archived_log al, v$log_history lh;

If the two numbers are the same then the standby has applied all redo sent
by the primary. If the numbers differ by more than 1 then proceed to step
2.

2. Verify that the standby is in the mounted state:

select open_mode from v$database;

3. Determine if there is an archive gap on your physical standby database by
querying the V$ARCHIVE_GAP view as shown in the following query:

select * from v$archive_gap;

The V$ARCHIVE_GAP fixed view on a physical standby database only returns
the next gap that is currently blocking redo apply from continuing. After
resolving the identified gap and starting redo apply, query the
V$ARCHIVE_GAP fixed view again on the physical standby database to
determine the next gap sequence, if there is one. Repeat this process
until there are no more gaps.
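Once the missing sequence has been identified, copying the archived log to the standby and registering it is usually enough for redo apply to continue; a sketch (the path is a placeholder):

SQL> alter database register logfile '/u01/oradata/arch/1_57.arc';

If FAL_SERVER is configured, gaps are normally fetched and resolved automatically.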



If the V$ARCHIVE_GAP view is not available (or returns no rows), the following query can be used instead:

with prod as (
  select max(sequence#) as seq
  from v_$archived_log
  where resetlogs_time = (select resetlogs_time from v_$database)
),
stby as (
  select max(sequence#) as seq, dest_id
  from v_$archived_log
  where first_change# > (select resetlogs_change# from v_$database)
  and applied = 'YES'
  and dest_id in (1,2)
  group by dest_id
)
select prod.seq - stby.seq, stby.dest_id
from prod, stby;

4. Verify that managed recovery is running:

select process,status from v$managed_standby;

When managed recovery is running you will see an MRP process. If you do not see an MRP process then start managed recovery by issuing the following
command:

recover managed standby database disconnect;

Some possible statuses for the MRP are listed below:

ERROR – This means that the process has failed. See the alert log or v$dataguard_status for further information.

WAIT_FOR_LOG – Process is waiting for the archived redo log to be completed. Switch an archive log on the primary and query v$managed_standby to see if the status changes to APPLYING_LOG.

WAIT_FOR_GAP – Process is waiting for the archive gap to be resolved. Review the alert log to see if FAL_SERVER has been called to resolve the gap.

APPLYING_LOG – Process is applying the archived redo log to the standby database.



Troubleshooting SQL Apply services
1. Verify that log apply services on the standby are currently running.

To verify that logical apply is currently available to apply changes perform the following query:

SQL> SELECT PID, TYPE, STATUS, HIGH_SCN
2> FROM V$LOGSTDBY;

When querying the V$LOGSTDBY view, pay special attention to the HIGH_SCN column. This is an activity indicator. As long as it is changing each time you query the V$LOGSTDBY view, progress is being made. The STATUS column
gives a text description of the current activity.

If the query against V$LOGSTDBY returns no rows then logical apply is not running. Start logical apply by issuing the following statement:

SQL> alter database start logical standby apply;

If the query against V$LOGSTDBY continues to return no rows then proceed to step 2.

2. To determine if there is an archive gap in your dataguard configuration;  query the DBA_LOGSTDBY_LOG view on the logical standby database.

SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#",
2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,
3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#"
4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

Copy the missing logs to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/oradata/arch/1_57.arc';

After you register these logs on the logical standby database, you can restart log apply services. The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL apply operations from continuing. After resolving the identified gap and starting log apply services, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

3. Determine if logical apply is receiving errors while performing apply operations.

Log apply services cannot apply unsupported DML statements, DDL statements and Oracle supplied packages to a logical standby database in SQL apply mode. When an unsupported statement or package is encountered, SQL apply
operations stop. To determine if SQL apply has stopped due to errors you should query the DBA_LOGSTDBY_EVENTS view. When querying the view, select the columns in order by EVENT_TIME. This ordering ensures that a shutdown
failure appears last in the view. For example:

SQL> SELECT XIDUSN, XIDSLT, XIDSQN, STATUS, STATUS_CODE
2> FROM DBA_LOGSTDBY_EVENTS
3> WHERE EVENT_TIME =
4> (SELECT MAX(EVENT_TIME)
5> FROM DBA_LOGSTDBY_EVENTS);

If an error requiring database management occurred (such as adding a tablespace, datafile, or running out of space in a tablespace), then you can fix the problem manually and resume SQL apply.

If an error occurred because a SQL statement was entered incorrectly, conflicted with an existing object, or violated a constraint, then enter the correct SQL statement and use the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure
to ensure that the incorrect statement is ignored the next time SQL apply operations are run.
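As a sketch, skipping the failing transaction identified by the query above and restarting SQL apply looks like this (the three values are the XIDUSN, XIDSLT and XIDSQN returned by the query and are placeholders here):

SQL> EXECUTE DBMS_LOGSTDBY.SKIP_TRANSACTION(1, 13, 1726);
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;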

4. Query DBA_LOGSTDBY_PROGRESS to verify that log apply services are progressing.

The DBA_LOGSTDBY_PROGRESS view describes the progress of SQL apply operations on the logical standby databases. For example:

SQL> SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME,
2> NEWEST_SCN, NEWEST_TIME
3> FROM DBA_LOGSTDBY_PROGRESS;

The APPLIED_SCN indicates that committed transactions at or below that SCN have been applied. The NEWEST_SCN is the maximum SCN to which data could be applied if no more logs were received. This is usually MAX(NEXT_CHANGE#)-1
from DBA_LOGSTDBY_LOG when there are no gaps in the list. When NEWEST_SCN and APPLIED_SCN are equal, all available changes have been applied. If APPLIED_SCN is below NEWEST_SCN and is increasing, then
SQL apply is currently processing changes.

5. Verify that the table that is not receiving rows is not listed in DBA_LOGSTDBY_UNSUPPORTED.

The DBA_LOGSTDBY_UNSUPPORTED view lists all of the tables that contain datatypes not supported by logical standby databases in the current release. These tables are not maintained (will not have DML applied) by the logical
standby database. Query this view on the primary database to ensure that the tables necessary for critical applications are not in this list. If the primary database includes unsupported tables that are critical, consider using a physical standby database.
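A quick way to check this, run on the primary database:

SELECT DISTINCT owner, table_name
FROM dba_logstdby_unsupported
ORDER BY owner, table_name;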

Wednesday, February 20, 2019

Run a SQL Tuning Advisor For A given Sql_id


When we run the SQL Tuning Advisor against a SQL statement or sql_id, it provides tuning recommendations for that query to improve performance.
It might suggest creating indexes or accepting a SQL profile.

The Diagnostics and Tuning Pack licenses are required to use this feature.

In the tutorial below we will explain how to run the SQL Tuning Advisor against a sql_id.

Suppose the sql_id is dtj3d4das6a9a.

a) Create Tuning Task

DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          sql_id      => 'dtj3d4das6a9a',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 500,
                          task_name   => 'dtj3d4das6a9a_tuning_task11',
                          description => 'Tuning task1 for statement dtj3d4das6a9a');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

b). Executing Tuning task:
begin DBMS_SQLTUNE.execute_tuning_task(task_name => 'dtj3d4das6a9a_tuning_task11'); end;
/

c). Get the Tuning Advisor report (see the note after step e below for accepting a recommended SQL profile).
select dbms_sqltune.report_tuning_task('dtj3d4das6a9a_tuning_task11') from dual;

d). Get the list of tuning tasks present in the database:
We can get the list of tuning tasks present in the database from DBA_ADVISOR_LOG:

SELECT TASK_NAME, STATUS FROM DBA_ADVISOR_LOG WHERE TASK_NAME LIKE '%tuning_task%';

e).Drop a tuning task:
execute dbms_sqltune.drop_tuning_task('dtj3d4das6a9a_tuning_task11');
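If the report from step (c) recommends a SQL profile and you decide to accept it, this can be done from the tuning task before dropping it; a sketch (the profile name and the force_match setting are examples):

DECLARE
  l_profile_name VARCHAR2(128);
BEGIN
  l_profile_name := DBMS_SQLTUNE.accept_sql_profile (
                      task_name   => 'dtj3d4das6a9a_tuning_task11',
                      name        => 'dtj3d4das6a9a_profile',
                      force_match => TRUE);
  DBMS_OUTPUT.put_line('Profile created: ' || l_profile_name);
END;
/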

What if the sql_id is not present in the cursor cache, but is present in an AWR snapshot?
SQL_ID = 0u676p5cvfxz4

First we need to find the begin snap and end snap of the sql_id.
select a.instance_number inst_id,
       a.snap_id,
       a.plan_hash_value,
       to_char(begin_interval_time,'dd-mon-yy hh24:mi') btime,
       abs(extract(minute from (end_interval_time-begin_interval_time))
           + extract(hour from (end_interval_time-begin_interval_time))*60
           + extract(day from (end_interval_time-begin_interval_time))*24*60) minutes,
       executions_delta executions,
       round(elapsed_time_delta/1000000/greatest(executions_delta,1),4) "avg duration (sec)"
from   dba_hist_sqlstat a, dba_hist_snapshot b
where  sql_id='&sql_id'
and    a.snap_id=b.snap_id
and    a.instance_number=b.instance_number
order by snap_id desc, a.instance_number;

From here we can get the begin snap and end snap of the sql_id.

begin_snap -> 870
end_snap -> 910

1. Create the tuning task:
DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          begin_snap  => 870,
                          end_snap    => 910,
                          sql_id      => '0u676p5cvfxz4',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 60,
                          task_name   => '0u676p5cvfxz4_AWR_tuning_task',
                          description => 'Tuning task for statement 0u676p5cvfxz4  in AWR');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/
 2. Execute the tuning task:
 EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '0u676p5cvfxz4_AWR_tuning_task');
 3. Get the tuning task recommendation report
 SELECT DBMS_SQLTUNE.report_tuning_task('0u676p5cvfxz4_AWR_tuning_task') AS recommendations FROM dual;

Drop SQL Baselines In Oracle



1. Get the sql_handle and plan_name of the baseline(s) for the sql_id:


SELECT sql_handle, plan_name FROM dba_sql_plan_baselines WHERE signature IN ( SELECT exact_matching_signature FROM gv$sql WHERE sql_id='&SQL_ID');

SQL_HANDLE                                    PLAN_NAME
--------------------------------------------- ----------------------------------------------------
SQL_a7ac813cbf25e65f                          SQL_PLAN_agb417kzkbtkz479e6372

2. Drop the baseline:

SQL> select sql_handle,plan_name from dba_sql_plan_baselines where plan_name='SQL_PLAN_agb417kzkbtkz479e6372';

SQL_HANDLE                                    PLAN_NAME
--------------------------------------------- -------------------------------------------------------------------
SQL_a7ac813cbf25e65f                          SQL_PLAN_agb417kzkbtkz479e6372


declare
drop_result pls_integer;
begin
drop_result := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
sql_handle => 'SQL_a7ac813cbf25e65f',
plan_name => 'SQL_PLAN_agb417kzkbtkz479e6372');
dbms_output.put_line(drop_result);
end;
/

PL/SQL procedure successfully completed.

SQL> select sql_handle,plan_name from dba_sql_plan_baselines where plan_name='SQL_PLAN_agb417kzkbtkz479e6372';

no rows selected

A sql_handle can have multiple SQL plan baselines attached, so if you want to drop all the baselines for that handle, drop them by sql_handle without specifying a plan_name.


declare
 drop_result pls_integer;
 begin
 drop_result := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
 sql_handle => 'SQL_a7ac813cbf25e65f');
 dbms_output.put_line(drop_result); 
 end;
/


Using a SQL Plan Baseline to make the optimizer choose a better execution plan without changing the query or adding hints

As a DBA, we might be asked to tune a SQL statement without changing the code, adding hints, or adding/removing joins. Let's see how we can achieve this. Here I will demonstrate an example of forcing the optimizer to change the execution plan of a SQL statement without changing the SQL itself.

PROCEDURE-1) I have created two tables, EMPLOYEE and DEPARTMENT, with EMP_ID and DEPT_ID respectively as primary keys. I also create an index on the DEPT_ID column of EMPLOYEE, which is a foreign key to DEPARTMENT.
PROCEDURE-2) The SQL statement in the example is from a large batch job running daily in an Oracle database. During the tuning process it was found that adding a composite index to a big table could significantly improve the query performance. However, this is a "popular" table in the database and is used by many different modules and processes. To maintain the stability of the system we only want the new index to be used by the tuned SQL, not by any other SQL statements.

To achieve this we create the index as INVISIBLE, so it is not considered by the optimizer for any other SQL statement, and for this SQL statement we add the USE_INVISIBLE_INDEXES hint so that the index is used only for this particular SQL. The problem is we are not allowed to change the code, so adding the hint to the original SQL is not feasible.

In order to force the original SQL statement to use an execution plan in which the invisible index is used, we use an Oracle database feature named SQL Plan Management (SQL plan baselines), introduced in 11g. We create a plan baseline for the original SQL statement and for the one modified with the hint added, then we load the plan of the modified SQL into the baseline of the original SQL. The next time the original SQL runs, the optimizer will use the execution plan from the modified, hinted SQL, and therefore the invisible index is used. Oracle SQL Plan Management also ensures you keep getting a desirable plan, which can evolve over time as the optimizer discovers better ones.


STEP-1)creating table and inserting some records
drop table department;
create table department(dept_id number primary key, dept_name char(100));
drop table EMPLOYEE;
create table EMPLOYEE(emp_id number primary key, emp_name char(100), dept_id number references department(dept_id));
create index empidx1 on EMPLOYEE(dept_id);
insert into department select rownum, 'DEPARTMENT'||rownum from all_objects;
insert into EMPLOYEE select rownum, 'dept'||rownum, dept_id from department;
update EMPLOYEE set dept_id = 500 where dept_id > 100;

STEP-2) Gather table stats
begin dbms_stats.gather_table_stats (USER, 'EMPLOYEE', cascade=> true); end;
/
begin dbms_stats.gather_table_stats (USER, 'department', cascade=> true); end;
/

STEP-3)let us have a look at the undesirable plan which does not use the index.
select emp_name, dept_name
    from EMPLOYEE c, department p
    where c.dept_id = p.dept_id
    and c.dept_id = :dept_id;

select * from table (dbms_xplan.display_cursor());

 SQL_ID  0u676p5cvfxz4, child number 0
-------------------------------------
select emp_name, dept_name     from EMPLOYEE c, department p     where
c.dept_id = p.dept_id     and c.dept_id = :dept_id

Plan hash value: 341203176

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |   479 (100)|          |
|   1 |  NESTED LOOPS                |              |    19 |  4370 |   479   (1)| 00:00:06 |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENT   |     1 |   115 |     1   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0023547 |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL          | EMPLOYEE     |    19 |  2185 |   478   (1)| 00:00:06 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("P"."DEPT_ID"=TO_NUMBER(:DEPT_ID))
   4 - filter("C"."DEPT_ID"=TO_NUMBER(:DEPT_ID))

Note
-----
   - dynamic sampling used for this statement (level=2)
 
STEP-4) Load the above undesirable plan into a baseline, to establish a SQL plan baseline for this query into which the desired plan will be loaded later.
DECLARE
  n1 NUMBER;
BEGIN
  n1 := dbms_spm.load_plans_from_cursor_cache(sql_id => '0u676p5cvfxz4');
END;
/

 select  sql_text, sql_handle, plan_name, enabled
  from     dba_sql_plan_baselines
where lower(sql_text) like   '%emp_name%';

STEP-5)Disable undesirable plan so that this plan will not be used

declare
    cnt number;
   begin
   cnt := dbms_spm.alter_sql_plan_baseline (
    SQL_HANDLE => 'SQL_c17c9f7d83124502',
    PLAN_NAME => 'SQL_PLAN_c2z4zgq1j4j823ed2aa92',
    ATTRIBUTE_NAME => 'enabled',
    ATTRIBUTE_VALUE => 'NO');
    end;
/


Check that ENABLED is now NO:
select  sql_text, sql_handle, plan_name, enabled
    from     dba_sql_plan_baselines
    where lower(sql_text) like   '%emp_name%';

STEP-6) Now we use a hint in the above SQL to generate the optimal plan, which uses the index.

select /*+ index(e)*/ emp_name, dept_name
 from EMPLOYEE e, department d
where e.dept_id = d.dept_id
    and e.dept_id = :dept_id;
 
select * from table (dbms_xplan.display_cursor());

STEP-7) Now we will load the hinted plan into the baseline. Note that we use the SQL_ID and PLAN_HASH_VALUE of the hinted statement and the SQL_HANDLE of the unhinted statement, i.e. we are associating the hinted plan with the unhinted statement.

   
DECLARE
cnt NUMBER;
BEGIN
  cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'dtj3d4das6a9a', plan_hash_value => 2379270125, sql_handle => 'SQL_c17c9f7d83124502');
END;
/


STEP-8) Verify that there are now two plans loaded for that SQL statement:
- The unhinted sub-optimal plan is disabled.
- The hinted optimal plan, which even though it belongs to a "different" query can work with the earlier unhinted query (the SQL_HANDLE is the same), is enabled.

 select  sql_text, sql_handle, plan_name, enabled
    from     dba_sql_plan_baselines
    where lower(sql_text) like   '%emp_name%';


STEP-9) Verify that the hinted plan is used even though we do not use the hint in the query. The Note section of the plan output confirms that the baseline has been used for this statement.

   select emp_name, dept_name
    from EMPLOYEE c, department p
    where c.dept_id = p.dept_id
    and c.dept_id = :dept_id; 
 
   select * from table (dbms_xplan.display_cursor());
  select * FROM table(dbms_xplan.display_cursor(format=>'typical +peeked_binds'));

 select  sql_text, sql_handle, plan_name, enabled
    from     dba_sql_plan_baselines
    where lower(sql_text) like   '%emp_name%';
 
    select plan_table_output from table(dbms_xplan.display('plan_table',null,'typical -cost -bytes'));

Summary: Using this method you can swap the plan only for a query which is fundamentally the same, i.e. you should first obtain the desirable plan by adding hints, modifying an optimizer setting, playing around with statistics etc., and
then associate the sub-optimally performing statement with that optimal plan.


PROCEDURE-2) Generate a SQL Plan Baseline for the Original SQL
In an 11g database, by default Oracle does not capture SQL plan baselines automatically unless you set the init.ora parameter optimizer_capture_sql_plan_baselines to TRUE. So if a plan baseline does not exist for the original SQL statement, we need to generate it.

1) Create a SQL tuning set. Give it a name and description that suit your situation.

BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(
sqlset_name => '0u676p5cvfxz4_tuning_set',
description =>  'Shadow Process');
END;
/
2) If the SQL statement was run recently, get the starting and ending AWR snapshot numbers for the time period when the SQL was run. Also, using the SQL ID, get the plan hash value from the DBA_HIST_SQLSTAT and DBA_HIST_SNAPSHOT views.

3) Load the tuning set with the execution plan extracted from AWR, using the AWR snapshot numbers, the SQL_ID and the plan hash value.

DECLARE
cur SYS_REFCURSOR;
BEGIN
OPEN cur FOR
SELECT value(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
           begin_snap => 469,
           end_snap => 472,
           basic_filter => 'sql_id = ''0u676p5cvfxz4''
                            and plan_hash_value = 341203176')
           ) p;
DBMS_SQLTUNE.LOAD_SQLSET('0u676p5cvfxz4_tuning_set', cur);
CLOSE cur;
END;
/
4) Create SQL plan baseline from the loaded SQL tuning set

DECLARE
my_plans PLS_INTEGER;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( sqlset_name =>'0u676p5cvfxz4_tuning_set');
END;
/
5) Check the newly created plan baseline

select  sql_handle, plan_name, origin, enabled, accepted, fixed, sql_text, created, last_executed
from dba_sql_plan_baselines
where created > sysdate -1/24
order by sql_handle, plan_name;
SQL_HANDLE                PLAN_NAME                      ORIGIN         ENA ACC FIX CREATED
------------------------- ------------------------------ -------------- --- --- --- -------------------
SQL_ef88a476fc38c5af      SQL_PLAN_fz254fvy3jjdgc4138c40 MANUAL-LOAD    YES YES NO  13-MAY-16 10.57.36.000000 AM
Modify SQL Statement and Generate Its Plan Baseline

1) Add the USE_INVISIBLE_INDEXES hint to the original SQL statement.

2) Change the session parameter to capture plan baselines automatically:

ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;
3) Capture the SQL plan baseline of the modified SQL by running it twice.

4) Check the plan baseline to make sure it has been captured.

select  sql_handle, plan_name, origin, enabled, accepted, fixed, sql_text, created, last_executed
from dba_sql_plan_baselines
where created > sysdate -1/24
order by sql_handle, plan_name;
SQL_HANDLE                PLAN_NAME                      ORIGIN         ENA ACC FIX CREATED
------------------------- ------------------------------ -------------- --- --- --- -------------------
SQL_a7ac813cbf25e65f      SQL_PLAN_agb417kzkbtkz479e6372 AUTO-CAPTURE   YES YES NO  13-MAY-16 11.06.22.000000 AM

SQL_ef88a476fc38c5af      SQL_PLAN_fz254fvy3jjdgc4138c40 MANUAL-LOAD    YES YES NO  13-MAY-16 10.57.36.000000 AM
5) Get SQL_ID of the modified SQL

select distinct sql_id, plan_hash_value, sql_text
from v$sql
where sql_text like '%USE_INVISIBLE_INDEX%';
SQL_ID        PLAN_HASH_VALUE SQL_TEXT
------------- --------------- -------------------------------------------------------------------------

dtj3d4das6a9a       544808499 SELECT /*+ USE_INVISIBLE_INDEXES INDEX (OKLS IDX_COLL_OKL_S_01) USE_NL ( XICO OKLS )...
Create an Accepted Plan Baseline for the Original SQL Using that of the Modified SQL
Now we have two newly created SQL plan baselines: one for the original SQL statement and the other for the modified SQL with the hint. We know the performance of the latter is much better than the former, so we want Oracle to use the execution plan from the hinted (modified) SQL when the original SQL is run from the application. To achieve this, we need to create a new SQL plan baseline for the original SQL and make it ACCEPTED. The following PL/SQL block does the task. Here SQL_ID and PLAN_HASH_VALUE are those of the modified SQL statement, and SQL_HANDLE is the one of the original SQL, into which the plan should be loaded. Note that we also make this plan baseline FIXED, meaning the optimizer will give it preference over non-FIXED plans.

set serveroutput on
DECLARE
v_cnt PLS_INTEGER;
BEGIN
v_cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
SQL_ID => 'dtj3d4das6a9a',
PLAN_HASH_VALUE => 544808499,
SQL_HANDLE => 'SQL_ef88a476fc38c5af',
FIXED => 'YES',
ENABLED => 'YES');
DBMS_OUTPUT.PUT_LINE ('Plan loaded: '||v_cnt);
END;
/
Now check the SQL plan baselines again to verify a new baseline is indeed created for the original SQL.

SQL_HANDLE                PLAN_NAME                      ORIGIN         ENA ACC FIX CREATED
------------------------- ------------------------------ -------------- --- --- --- -------------------
SQL_a7ac813cbf25e65f      SQL_PLAN_agb417kzkbtkz479e6372 AUTO-CAPTURE   YES YES NO  13-MAY-16 11.06.22.000000 AM

SQL_ef88a476fc38c5af      SQL_PLAN_fz254fvy3jjdg479e6372 MANUAL-LOAD    YES YES YES 13-MAY-16 11.15.02.000000 AM
                          SQL_PLAN_fz254fvy3jjdgc4138c40 MANUAL-LOAD    YES YES NO  13-MAY-16 10.57.36.000000 AM
Run explain plan for the original SQL statement; you should see the following line included in the result:

- SQL plan baseline "SQL_PLAN_fz254fvy3jjdg479e6372" used for this statement
Summary
By using a SQL plan baseline we can force the optimizer to use the execution plan of a modified SQL statement (with the hint added) for the original SQL.

Sunday, February 17, 2019

Configure DB Console


PreReq:

Set environments (with example values):

export ORACLE_HOME=/oracle/11.2.0.4
export ORACLE_SID=orcl
export ORACLE_HOSTNAME=dbserver

Create the repository and configure the DB Console:

% emca -config dbcontrol db -repos create

parameters:
- database SID
- listener's port
- password for SYS
- password for SYSMAN
- password for DBSNMP

Drop the repository and deconfigure the DB Console:

% emca -deconfig dbcontrol db -repos drop

parameters:
- database SID
- listener's port
- password for SYS
- password for SYSMAN

Drop the DB Console (manually):

Warning: this command puts the database in quiesce mode for DB Control releases 10.x to 11.1.x.
Starting with DB Control release 11.2.x, the database is no longer put in quiesce mode.

SQL> conn / as sysdba
SQL> drop user sysman cascade;
SQL> drop role MGMT_USER;
SQL> drop user MGMT_VIEW cascade;
SQL> drop public synonym MGMT_TARGET_BLACKOUTS;
SQL> drop public synonym SETEMVIEWUSERCONTEXT;

% emca -deconfig dbcontrol db -repos drop

Link for using DB Console:

https://servername:port/em