Friday, February 22, 2019

Uninstall the oracle software

STEP-1) Before uninstalling the Oracle software, make sure you have deleted the Oracle databases and stopped any services running from the ORACLE_HOME.

STEP-2) The following methods describe how to uninstall the Oracle software:

1) Using the deinstall option of runInstaller
2) Using the deinstall utility under the Oracle home location
3) Manual removal

Method 1:
When you install the Oracle software, you use runInstaller from the installation media. The same runInstaller can also be used to uninstall the Oracle software.

Go to the runInstaller location and run the command below:
./runInstaller -deinstall -home /u01/app/oracle/product/12.1.0/dbhome_1/

Method 2:
Under the ORACLE_HOME location there is a deinstall utility that you can use to remove the Oracle software.
cd $ORACLE_HOME/deinstall
./deinstall

Method 3:
Sometimes the Oracle software installation gets corrupted, and in that case the deinstallation utilities above will not help you remove the Oracle software. Use the method below to remove the Oracle home using Linux commands.

Stop all the Oracle databases and any processes running from the ORACLE_HOME.

Delete ORACLE_HOME
*** Please be cautious when using the rm -Rf command
cd $ORACLE_HOME
rm -Rf *

Delete ORACLE_BASE
cd $ORACLE_BASE
rm -Rf *

Remove the oratab file
rm /etc/oratab

Oracle 12cR2 silent installation

Step-1: Download the Oracle 12cR2 software files, copy them to the Linux server and unzip them.
Step-2: Create a response file under the /tmp location with the details below.
vi /tmp/12cR2_response_silentinstall.rsp

Note: Replace the ORACLE_HOSTNAME, ORACLE_HOME and ORACLE_BASE values in the file below.

oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.2.0
oracle.install.option=INSTALL_DB_SWONLY
ORACLE_HOSTNAME=ip-172-3-16-9
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oraInventory
SELECTED_LANGUAGES=en
ORACLE_HOME=/u01/app/oracle/product/12.2.0/dbhome_1
ORACLE_BASE=/u01/app/oracle
oracle.install.db.InstallEdition=EE
oracle.install.db.OSDBA_GROUP=dba
oracle.install.db.OSOPER_GROUP=dba
oracle.install.db.OSBACKUPDBA_GROUP=dba
oracle.install.db.OSDGDBA_GROUP=dba
oracle.install.db.OSKMDBA_GROUP=dba
oracle.install.db.OSRACDBA_GROUP=dba
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES

STEP-3: Go to the location where the 12cR2 installation software was copied:
cd <12c software copied location>/database

STEP-4: Run runInstaller in silent mode:
./runInstaller -ignoreSysPrereqs -showProgress -silent -responseFile /tmp/12cR2_response_silentinstall.rsp

How to find long running transactions in an Oracle database?

SELECT t.start_time,
  s.sid,
  s.serial#,
  s.username,
  s.status,
  s.schemaname,
  s.osuser,
  s.process,
  s.machine,
  s.terminal,
  s.program,
  s.module,
  s.type,
  TO_CHAR(s.logon_time,'DD/MON/YY HH24:MI:SS') logon_time
FROM v$transaction t,
  v$session s
WHERE s.saddr = t.ses_addr
AND s.status = 'ACTIVE'
AND s.username IS NOT NULL
ORDER BY start_time desc;
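
To gauge how much work a long-running transaction has already done, the undo usage columns of v$transaction can also be checked; a small sketch (not part of the original query):

-- undo blocks and undo records used per open transaction
SELECT s.sid,
  s.serial#,
  t.start_time,
  t.used_ublk,
  t.used_urec
FROM v$transaction t,
  v$session s
WHERE s.saddr = t.ses_addr
ORDER BY t.used_ublk DESC;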

Thursday, February 21, 2019

DATA GUARD Monitoring Scripts


Starting Redo Apply on standby database

Without real-time apply (applying archived logs) on the standby database:

alter database recover managed standby database disconnect from session;

With real-time apply (redo logs):

alter database recover managed standby database using current logfile disconnect;

Stopping Redo Apply on standby database:

alter database recover managed standby database cancel;

Monitoring Redo Apply on Physical Standby Database:
SELECT arch.thread# "Thread",
  arch.sequence# "LastSequenceReceived",
  appl.sequence# "LastSequenceApplied",
  (arch.sequence# - appl.sequence#) "Difference"
FROM
  (SELECT thread#,
    sequence#
  FROM v$archived_log
  WHERE (thread#, first_time) IN
    (SELECT thread#, MAX(first_time) FROM v$archived_log GROUP BY thread#
    )
  ) arch,
  (SELECT thread#,
    sequence#
  FROM v$log_history
  WHERE (thread#, first_time) IN
    (SELECT thread#, MAX(first_time) FROM v$log_history GROUP BY thread#
    )
  ) appl
WHERE arch.thread# = appl.thread#;


Standby database process status
select distinct process, status, thread#, sequence#, block#, blocks from v$managed_standby ;

If using real time apply
select TYPE, ITEM, to_char(TIMESTAMP, 'DD-MON-YYYY HH24:MI:SS') from v$recovery_progress where ITEM='Last Applied Redo';

or
select recovery_mode from v$archive_dest_status where dest_id=1;

Troubleshooting Log transport services
1) Verify that the primary database is in archive log mode and has automatic
archiving enabled:

select log_mode from v$database;
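
The query above only confirms the archive log mode; as an additional hedged check you can also confirm that the archiver process is started:

-- expect STARTED (STOPPED or FAILED indicates an archiving problem)
select archiver from v$instance;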

2) Verify that sufficient space exists in the local archive destination as
well as in all destinations marked as MANDATORY. The following query can be
used to determine all local and mandatory destinations that need to be
checked:

select dest_id,destination from v$archive_dest
where schedule='ACTIVE'
and (binding='MANDATORY' or target='PRIMARY');

3) Determine if the last log switch to any remote destinations resulted in an
error. Immediately following a log switch run the following query:

select dest_id,status,error from v$archive_dest
where target='STANDBY';

Address any errors that are returned in the error column. Perform a log
switch and re-query to determine if the issue has been resolved.

4) Determine if any error conditions have been reached by querying the
v$dataguard_status view (view only available in 9.2.0 and above):

select message, to_char(timestamp,'HH:MI:SS') timestamp
from v$dataguard_status
where severity in ('Error','Fatal')
order by timestamp;

5) Gather information about how the remote destinations are performing the
archival:

select dest_id,archiver,transmit_mode,affirm,net_timeout,delay_mins,async_blocks
from v$archive_dest where target='STANDBY';

6) Run the following query to determine the current sequence number, the last
sequence archived, and the last sequence applied to a standby:

select ads.dest_id,
max(sequence#) "Current Sequence",
max(log_sequence) "Last Archived",
max(applied_seq#) "Last Sequence Applied"
from v$archived_log al, v$archive_dest ad, v$archive_dest_status ads
where ad.dest_id=al.dest_id
and al.dest_id=ads.dest_id
group by ads.dest_id;

If you are remotely archiving using the LGWR process then the archived
sequence should be one higher than the current sequence. If remotely
archiving using the ARCH process then the archived sequence should be equal
to the current sequence. The applied sequence information is updated at
log switch time.
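
For comparison, a quick way to see the current online redo log sequence on the primary (a small sketch, not from the original note):

select thread#, sequence#, status
from v$log
where status = 'CURRENT';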



Troubleshooting Redo Apply services
1. Verify the last sequence# received and the last sequence# applied to the
standby database by running the following query:

select max(al.sequence#) "Last Seq Received",
max(lh.sequence#) "Last Seq Applied"
from v$archived_log al, v$log_history lh;

If the two numbers are the same then the standby has applied all redo sent
by the primary. If the numbers differ by more than 1 then proceed to step
2.

2. Verify that the standby is in the mounted state:

select open_mode from v$database;

3. Determine if there is an archive gap on your physical standby database by
querying the V$ARCHIVE_GAP view as shown in the following query:

select * from v$archive_gap;

The V$ARCHIVE_GAP fixed view on a physical standby database only returns
the next gap that is currently blocking redo apply from continuing. After
resolving the identified gap and starting redo apply, query the
V$ARCHIVE_GAP fixed view again on the physical standby database to
determine the next gap sequence, if there is one. Repeat this process
until there are no more gaps.
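
If a gap has to be resolved manually on a physical standby, a typical sequence (a hedged sketch; the file name below is only an example) is to copy the missing archived logs from the primary, register them, and restart redo apply:

-- register a manually copied archived log (hypothetical path)
alter database register physical logfile '/u01/oradata/arch/1_57.arc';

-- restart redo apply
alter database recover managed standby database disconnect from session;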



If the V$ARCHIVE_GAP view does not exist:

with prod as
  (select max(sequence#) as seq
   from v_$archived_log
   where resetlogs_time = (select resetlogs_time from v_$database)),
stby as
  (select max(sequence#) as seq, dest_id
   from v_$archived_log
   where first_change# > (select resetlogs_change# from v_$database)
     and applied = 'YES'
     and dest_id in (1,2)
   group by dest_id)
select prod.seq - stby.seq, stby.dest_id
from prod, stby;

4. Verify that managed recovery is running:

select process,status from v$managed_standby;

When managed recovery is running you will see an MRP process. If you do not see an MRP process then start managed recovery by issuing the following
command:

recover managed standby database disconnect;

Some possible statuses for the MRP are listed below:

ERROR – This means that the process has failed. See the alert log or v$dataguard_status for further information.

WAIT_FOR_LOG – Process is waiting for the archived redo log to be completed. Switch an archive log on the primary and query v$managed_standby to see if the status changes to APPLYING_LOG.

WAIT_FOR_GAP – Process is waiting for the archive gap to be resolved. Review the alert log to see if FAL_SERVER has been called to resolve the gap.

APPLYING_LOG – Process is applying the archived redo log to the standby database.



Troubleshooting SQL Apply services
1. Verify that log apply services on the standby are currently running.

To verify that logical apply is currently available to apply changes perform the following query:

SQL> SELECT PID, TYPE, STATUS, HIGH_SCN
2> FROM V$LOGSTDBY;

When querying the V$LOGSTDBY view, pay special attention to the HIGH_SCN column. This is an activity indicator. As long as it is changing each time you query the V$LOGSTDBY view, progress is being made. The STATUS column
gives a text description of the current activity.

If the query against V$LOGSTDBY returns no rows then logical apply is not running. Start logical apply by issuing the following statement:

SQL> alter database start logical standby apply;

If the query against V$LOGSTDBY continues to return no rows then proceed to step 2.

2. To determine if there is an archive gap in your Data Guard configuration, query the DBA_LOGSTDBY_LOG view on the logical standby database.

SQL> SELECT SUBSTR(FILE_NAME,1,25) FILE_NAME, SUBSTR(SEQUENCE#,1,4) "SEQ#",
2> FIRST_CHANGE#, NEXT_CHANGE#, TO_CHAR(TIMESTAMP, 'HH:MI:SS') TIMESTAMP,
3> DICT_BEGIN BEG, DICT_END END, SUBSTR(THREAD#,1,4) "THR#"
4> FROM DBA_LOGSTDBY_LOG ORDER BY SEQUENCE#;

Copy the missing logs to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/oradata/arch/1_57.arc';

After you register these logs on the logical standby database, you can restart log apply services. The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL apply operations from continuing. After resolving the identified gap and starting log apply services, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

3. Determine if logical apply is receiving errors while performing apply operations.

Log apply services cannot apply unsupported DML statements, DDL statements and Oracle supplied packages to a logical standby database in SQL apply mode. When an unsupported statement or package is encountered, SQL apply
operations stop. To determine if SQL apply has stopped due to errors you should query the DBA_LOGSTDBY_EVENTS view. When querying the view, select the columns in order by EVENT_TIME. This ordering ensures that a shutdown
failure appears last in the view. For example:

SQL> SELECT XIDUSN, XIDSLT, XIDSQN, STATUS, STATUS_CODE
2> FROM DBA_LOGSTDBY_EVENTS
3> WHERE EVENT_TIME =
4> (SELECT MAX(EVENT_TIME)
5> FROM DBA_LOGSTDBY_EVENTS);

If an error requiring database management occurred (such as adding a tablespace, datafile, or running out of space in a tablespace), then you can fix the problem manually and resume SQL apply.

If an error occurred because a SQL statement was entered incorrectly, conflicted with an existing object, or violated a constraint, then enter the correct SQL statement and use the DBMS_LOGSTDBY.SKIP_TRANSACTION procedure
to ensure that the incorrect statement is ignored the next time SQL apply operations are run.
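
A hedged sketch of skipping the failing transaction and restarting SQL apply (the XIDUSN/XIDSLT/XIDSQN values below are hypothetical; use the ones returned by the query above):

-- skip the failed transaction, after running the corrected statement manually on the standby
EXEC DBMS_LOGSTDBY.SKIP_TRANSACTION(1, 13, 1726);

-- restart SQL apply
ALTER DATABASE START LOGICAL STANDBY APPLY;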

4. Query DBA_LOGSTDBY_PROGRESS to verify that log apply services are progressing.

The DBA_LOGSTDBY_PROGRESS view describes the progress of SQL apply operations on the logical standby databases. For example:

SQL> SELECT APPLIED_SCN, APPLIED_TIME, READ_SCN, READ_TIME,
2> NEWEST_SCN, NEWEST_TIME
3> FROM DBA_LOGSTDBY_PROGRESS;

The APPLIED_SCN indicates that committed transactions at or below that SCN have been applied. The NEWEST_SCN is the maximum SCN to which data could be applied if no more logs were received. This is usually the MAX(NEXT_CHANGE#)-1
from DBA_LOGSTDBY_LOG when there are no gaps in the list. When the values of NEWEST_SCN and APPLIED_SCN are equal, all available changes have been applied. If your APPLIED_SCN is below NEWEST_SCN and is increasing, then
SQL apply is currently processing changes.
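
A quick, hedged way to express the remaining apply lag from the same view:

-- zero means all available changes have been applied
SELECT newest_scn - applied_scn AS scn_not_yet_applied
FROM dba_logstdby_progress;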

5. Verify that the table that is not receiving rows is not listed in DBA_LOGSTDBY_UNSUPPORTED.

The DBA_LOGSTDBY_UNSUPPORTED view lists all of the tables that contain datatypes not supported by logical standby databases in the current release. These tables are not maintained (will not have DML applied) by the logical
standby database. Query this view on the primary database to ensure that the tables necessary for critical applications are not in this list. If the primary database includes unsupported tables that are critical, consider using a physical standby database.
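
A hedged example of checking this on the primary database:

-- tables that SQL apply will not maintain on the logical standby
SELECT DISTINCT owner, table_name
FROM dba_logstdby_unsupported
ORDER BY owner, table_name;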

Wednesday, February 20, 2019

Run a SQL Tuning Advisor For A given Sql_id


When we run SQL Tuning Advisor against a SQL statement or sql_id, it provides tuning recommendations that can be applied to that query to improve performance. It might suggest creating a few indexes or accepting a SQL profile (a sketch of accepting a profile is shown after the steps below).

The SQL Tuning Advisor requires the Diagnostics and Tuning Pack licenses.

In the tutorial below we will explain how to run SQL Tuning Advisor against a sql_id.

Suppose the sql id is – dtj3d4das6a9a

a) Create Tuning Task

DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          sql_id      => 'dtj3d4das6a9a',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 500,
                          task_name   => 'dtj3d4das6a9a_tuning_task11',
                          description => 'Tuning task1 for statement dtj3d4das6a9a');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

b) Execute the tuning task:
begin DBMS_SQLTUNE.execute_tuning_task(task_name => 'dtj3d4das6a9a_tuning_task11'); end;

c) Get the tuning advisor report:
select dbms_sqltune.report_tuning_task('dtj3d4das6a9a_tuning_task11') from dual;

d) Get the list of tuning tasks present in the database:
We can get the list of tuning tasks present in the database from DBA_ADVISOR_LOG.

SELECT TASK_NAME, STATUS FROM DBA_ADVISOR_LOG WHERE TASK_NAME = 'dtj3d4das6a9a_tuning_task11';

e) Drop a tuning task:
execute dbms_sqltune.drop_tuning_task('dtj3d4das6a9a_tuning_task11');
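
If the report from step c) recommends a SQL profile, it can be accepted (before dropping the task) with DBMS_SQLTUNE.ACCEPT_SQL_PROFILE; a minimal sketch using the task created above:

-- accept the SQL profile recommended by the tuning task
EXEC DBMS_SQLTUNE.accept_sql_profile(task_name => 'dtj3d4das6a9a_tuning_task11', replace => TRUE);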

What if the sql_id is not present in the cursor cache, but is present in an AWR snapshot?
SQL_ID = 0u676p5cvfxz4

First we need to find the begin snap and end snap of the sql_id.
select a.instance_number inst_id,
       a.snap_id,
       a.plan_hash_value,
       to_char(begin_interval_time,'dd-mon-yy hh24:mi') btime,
       abs(extract(minute from (end_interval_time-begin_interval_time))
           + extract(hour from (end_interval_time-begin_interval_time))*60
           + extract(day from (end_interval_time-begin_interval_time))*24*60) minutes,
       executions_delta executions,
       round(elapsed_time_delta/1000000/greatest(executions_delta,1),4) "avg duration (sec)"
from   dba_hist_sqlstat a, dba_hist_snapshot b
where  sql_id = '&sql_id'
and    a.snap_id = b.snap_id
and    a.instance_number = b.instance_number
order  by snap_id desc, a.instance_number;

From here we can get the begin snap and end snap of the sql_id.

begin_snap -> 870
end_snap -> 910

1. Create the tuning task:
DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          begin_snap  => 870,
                          end_snap    => 910,
                          sql_id      => '0u676p5cvfxz4',
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 60,
                          task_name   => '0u676p5cvfxz4_AWR_tuning_task',
                          description => 'Tuning task for statement 0u676p5cvfxz4  in AWR');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/
 2. Execute the tuning task:
 EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '0u676p5cvfxz4_AWR_tuning_task');
 3. Get the tuning task recommendation report
 SELECT DBMS_SQLTUNE.report_tuning_task('0u676p5cvfxz4_AWR_tuning_task') AS recommendations FROM dual;

Drop SQL Baselines In Oracle



1. Get the sql_handle and baseline plan name for the sql_id:


SELECT sql_handle, plan_name FROM dba_sql_plan_baselines WHERE signature IN (SELECT exact_matching_signature FROM gv$sql WHERE sql_id='&SQL_ID');

SQL_HANDLE                                    PLAN_NAME
--------------------------------------------- ----------------------------------------------------
SQL_a7ac813cbf25e65f                          SQL_PLAN_agb417kzkbtkz479e6372

2. Drop the baseline:

SQL> select sql_handle,plan_name from dba_sql_plan_baselines where plan_name='SQL_PLAN_agb417kzkbtkz479e6372';

SQL_HANDLE                                    PLAN_NAME
--------------------------------------------- -------------------------------------------------------------------
SQL_a7ac813cbf25e65f                          SQL_PLAN_agb417kzkbtkz479e6372


declare
drop_result pls_integer;
begin
drop_result := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
sql_handle => 'SQL_a7ac813cbf25e65f',
plan_name => 'SQL_PLAN_agb417kzkbtkz479e6372');
dbms_output.put_line(drop_result);
end;
/

PL/SQL procedure successfully completed.

SQL> select sql_handle,plan_name from dba_sql_plan_baselines where plan_name='SQL_PLAN_agb417kzkbtkz479e6372';

no rows selected

A sql_handle can have multiple SQL baselines attached, so if you want to drop all the SQL baselines of that handle, drop by sql_handle without specifying plan_name.


declare
 drop_result pls_integer;
 begin
 drop_result := DBMS_SPM.DROP_SQL_PLAN_BASELINE(
 sql_handle => 'SQL_a7ac813cbf25e65f');
 dbms_output.put_line(drop_result); 
 end;
/


Using a SQL Plan Baseline to make the optimizer choose a better execution plan without changing the query or adding hints

As a DBA, we might be asked to tune a SQL statement without changing the code, adding hints, or adding/removing joins. Let's see how we can achieve this. Here, I will demonstrate an example of forcing the optimizer to change the execution plan of a SQL statement without changing the SQL itself.

PROCEDURE-1) I have created two tables, EMPLOYEE and DEPARTMENT, having EMP_ID and DEPT_ID respectively as primary keys. I also create an index on the EMPLOYEE table, and DEPT_ID in EMPLOYEE is a foreign key to DEPARTMENT.
PROCEDURE-2) The SQL statement in the example is from a large batch job running daily in an Oracle database. During the tuning process it was found that adding a composite index to a big table could significantly improve the query performance. However, this is a “popular” table in the database and is used by many different modules and processes. To maintain the stability of the system, we only want the new index to be used by the tuned SQL, not by any other SQL statements. To achieve this we create the index as INVISIBLE, so it is not used by the optimizer for any other SQL statement, and for this SQL statement we add the USE_INVISIBLE_INDEXES hint so that the index is only used by the optimizer for this particular SQL.

The problem is that we are not allowed to change the code, therefore adding the hint to the original SQL is not feasible. In order to force the original SQL statement to use an execution plan in which the invisible index is used, we use an Oracle database feature named SQL Plan Baseline, introduced in 11g. We create a plan baseline for the original SQL statement and for the one modified with the hint added, then we replace the plan baseline of the original SQL with that of the modified one. The next time the original SQL runs, the optimizer will use the execution plan from the modified (hinted) statement, and therefore the invisible index is used for this SQL. Oracle SQL Plan Management ensures that you get the desirable plan, which can evolve over time as the optimizer discovers better ones.
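
As an illustration of that approach, here is a hedged sketch using the EMPLOYEE/DEPARTMENT tables from the demo below (the index name is made up for this example and is not part of the original demo):

-- create the index INVISIBLE so no other statement starts using it
create index emp_dept_idx1 on employee(dept_id) invisible;

-- hinted copy of the statement, used only to capture the desired plan
select /*+ USE_INVISIBLE_INDEXES INDEX(e emp_dept_idx1) */ e.emp_name, d.dept_name
from employee e, department d
where e.dept_id = d.dept_id
and e.dept_id = :dept_id;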


STEP-1)creating table and inserting some records
drop table department;
create table department(dept_id number primary key, dept_name char(100));
drop table EMPLOYEE;
create table EMPLOYEE(emp_id number primary key, emp_name char(100), dept_id number references department(dept_id));
create index empidx1 on EMPLOYEE(emp_id);
insert into department select rownum, 'DEPARTMENT'||rownum from all_objects;
insert into EMPLOYEE select rownum, 'dept'||rownum, dept_id from department;
update EMPLOYEE set dept_id = 500 where dept_id > 100;

STEP-2) Gather table stats
begin dbms_stats.gather_table_stats (USER, 'EMPLOYEE', cascade=> true); end;
begin dbms_stats.gather_table_stats (USER, 'department', cascade=> true); end;

STEP-3) Let us have a look at the undesirable plan, which does not use the index.
select emp_name, dept_name
    from EMPLOYEE c, department p
    where c.dept_id = p.dept_id
    and c.dept_id = :dept_id;

select * from table (dbms_xplan.display_cursor());

 SQL_ID  0u676p5cvfxz4, child number 0
-------------------------------------
select emp_name, dept_name     from EMPLOYEE c, department p     where
c.dept_id = p.dept_id     and c.dept_id = :dept_id

Plan hash value: 341203176

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |       |       |   479 (100)|          |
|   1 |  NESTED LOOPS                |              |    19 |  4370 |   479   (1)| 00:00:06 |
|   2 |   TABLE ACCESS BY INDEX ROWID| DEPARTMENT   |     1 |   115 |     1   (0)| 00:00:01 |
|*  3 |    INDEX UNIQUE SCAN         | SYS_C0023547 |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL          | EMPLOYEE     |    19 |  2185 |   478   (1)| 00:00:06 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("P"."DEPT_ID"=TO_NUMBER(:DEPT_ID))
   4 - filter("C"."DEPT_ID"=TO_NUMBER(:DEPT_ID))

Note
-----
   - dynamic sampling used for this statement (level=2)
 
STEP-4) Load the undesirable plan above into a baseline, to establish a SQL plan baseline for this query into which the desired plan will be loaded later.
 DECLARE
  n1 NUMBER;
BEGIN
  n1 := dbms_spm.load_plans_from_cursor_cache(sql_id => '0u676p5cvfxz4');
END;
/

 select  sql_text, sql_handle, plan_name, enabled
  from     dba_sql_plan_baselines
where lower(sql_text) like   '%emp_name%';

STEP-5) Disable the undesirable plan so that it will not be used.

declare
    cnt number;
   begin
   cnt := dbms_spm.alter_sql_plan_baseline (
    SQL_HANDLE => 'SQL_c17c9f7d83124502',
    PLAN_NAME => 'SQL_PLAN_c2z4zgq1j4j823ed2aa92',
    ATTRIBUTE_NAME => 'enabled',
    ATTRIBUTE_VALUE => 'NO');
    end;
/


Check that ENABLED is NO:
select  sql_text, sql_handle, plan_name, enabled
    from     dba_sql_plan_baselines
    where lower(sql_text) like   '%emp_name%';

STEP-6) Now we use a hint in the above SQL to generate the optimal plan, which uses the index.

select /*+ index(e)*/ emp_name, dept_name
 from EMPLOYEE e, department d
where e.dept_id = d.dept_id
    and e.dept_id = :dept_id;
 
select * from table (dbms_xplan.display_cursor());

STEP-7) Now we load the hinted plan into the baseline. Note that we use the SQL_ID and PLAN_HASH_VALUE of the hinted statement and the SQL_HANDLE of the unhinted statement, i.e. we are associating the hinted plan with the unhinted statement.

   
DECLARE
cnt NUMBER;
BEGIN
  cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'dtj3d4das6a9a', plan_hash_value => 2379270125, sql_handle => 'SQL_c17c9f7d83124502');
END;
/


STEP-8) Verify that there are now two plans loaded for that SQL statement:
The unhinted sub-optimal plan is disabled. The hinted optimal plan, which even though it is for a “different query” can work with the earlier unhinted query (the SQL_HANDLE is the same), is enabled.

 select  sql_text, sql_handle, plan_name, enabled
    from     dba_sql_plan_baselines
    where lower(sql_text) like   '%emp_name%';


STEP-9) Verify that the hinted plan is used even though we do not use the hint in the query. The Note section in the plan output confirms that the baseline has been used for this statement.

   select emp_name, dept_name
    from EMPLOYEE c, department p
    where c.dept_id = p.dept_id
    and c.dept_id = :dept_id; 
 
   select * from table (dbms_xplan.display_cursor());
  select * FROM table(dbms_xplan.display_cursor(format=>'typical +peeked_binds'));

 select  sql_text, sql_handle, plan_name, enabled
    from     dba_sql_plan_baselines
    where lower(sql_text) like   '%emp_name%';
 
    select plan_table_output from table(dbms_xplan.display('plan_table',null,'typical -cost -bytes'));

Summary: Using this method, you can swap the plan only for a query that is fundamentally the same, i.e. you first obtain the desirable plan by adding hints, modifying an optimizer setting, playing around with statistics, etc., and
then associate the sub-optimally performing statement with that optimal plan.


PROCEDURE-2) Generate SQL Plan Baseline for the Original SQL
In an 11g database, by default Oracle does not collect SQL plan baselines automatically unless you set the init.ora parameter optimizer_capture_sql_plan_baselines to TRUE. So if a plan baseline does not exist for the original SQL statement, we need to generate it.
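
A quick, hedged check and session-level enable of that parameter:

-- check the current setting (SQL*Plus)
SHOW PARAMETER optimizer_capture_sql_plan_baselines

-- enable automatic capture for the current session only
ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;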

1) Create a SQL tuning set. Give it a name and description that suit your situation.

BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(
sqlset_name => '0u676p5cvfxz4_tuning_set',
description => 'Shadow Process');
END;
/
2) If the SQL statement was run recently, get the starting and ending AWR snapshot numbers for the time period when the SQL was run. Also, using the SQL_ID, get the plan hash value from the DBA_HIST_SQLSTAT and DBA_HIST_SNAPSHOT views.
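
A hedged sketch of finding those snapshots and the plan hash value for the SQL_ID:

SELECT s.snap_id,
  q.plan_hash_value,
  TO_CHAR(s.begin_interval_time,'DD-MON-YY HH24:MI') begin_time
FROM dba_hist_sqlstat q,
  dba_hist_snapshot s
WHERE q.sql_id = '0u676p5cvfxz4'
AND q.snap_id = s.snap_id
AND q.instance_number = s.instance_number
ORDER BY s.snap_id;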

3) Load the tuning set with the execution plan extracted from AWR, using the AWR snapshot numbers, the SQL_ID and the plan hash value.

DECLARE
cur SYS_REFCURSOR;
BEGIN
OPEN cur FOR
SELECT value(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_WORKLOAD_REPOSITORY(
           begin_snap => 469,
           end_snap => 472,
           basic_filter => 'sql_id = ''0u676p5cvfxz4''
                            and plan_hash_value = 341203176')
           ) p;
DBMS_SQLTUNE.LOAD_SQLSET('0u676p5cvfxz4_tuning_set', cur);
CLOSE cur;
END;
/
4) Create SQL plan baseline from the loaded SQL tuning set

DECLARE
my_plans PLS_INTEGER;
BEGIN
my_plans := DBMS_SPM.LOAD_PLANS_FROM_SQLSET( sqlset_name =>'0u676p5cvfxz4_tuning_set');
END;
/
5) Check the newly created plan baseline

select  sql_handle, plan_name, origin, enabled, accepted, fixed, sql_text, created, last_executed
from dba_sql_plan_baselines
where created > sysdate -1/24
order by sql_handle, plan_name;
SQL_HANDLE                PLAN_NAME                      ORIGIN         ENA ACC FIX CREATED
------------------------- ------------------------------ -------------- --- --- --- -------------------
SQL_ef88a476fc38c5af      SQL_PLAN_fz254fvy3jjdgc4138c40 MANUAL-LOAD    YES YES NO  13-MAY-16 10.57.36.000000 AM
Modify SQL Statement and Generate Its Plan Baseline

1) Add USE_INVISIBLE_INDEX hint to the original SQL statement.

2) Change the session parameter to capture plan baselines automatically:

ALTER SESSION SET optimizer_capture_sql_plan_baselines = TRUE;

3) Capture the SQL plan baseline of the modified SQL by running it twice.

4) Check the plan baseline to make sure it has been captured.

select  sql_handle, plan_name, origin, enabled, accepted, fixed, sql_text, created, last_executed
from dba_sql_plan_baselines
where created > sysdate -1/24
order by sql_handle, plan_name;
SQL_HANDLE                PLAN_NAME                      ORIGIN         ENA ACC FIX CREATED
------------------------- ------------------------------ -------------- --- --- --- -------------------
SQL_a7ac813cbf25e65f      SQL_PLAN_agb417kzkbtkz479e6372 AUTO-CAPTURE   YES YES NO  13-MAY-16 11.06.22.000000 AM

SQL_ef88a476fc38c5af      SQL_PLAN_fz254fvy3jjdgc4138c40 MANUAL-LOAD    YES YES NO  13-MAY-16 10.57.36.000000 AM
5) Get SQL_ID of the modified SQL

select distinct sql_id, plan_hash_value, sql_text
from v$sql
where sql_text like '%USE_INVISIBLE_INDEX%';
SQL_ID        PLAN_HASH_VALUE SQL_TEXT
------------- --------------- -------------------------------------------------------------------------

dtj3d4das6a9a       544808499 SELECT /*+ USE_INVISIBLE_INDEXES INDEX (OKLS IDX_COLL_OKL_S_01) USE_NL ( XICO OKLS )...
Create an Accepted Plan Baseline for the Original SQL Using that of Modified SQL
Now we have two newly created SQL plan baselines, one for the original SQL statement and the other for the modified SQL with the hint, and we know the performance of the latter is much better than that of the former. So we want Oracle to use the execution plan from the modified SQL (with hint) when the original SQL is run from the application. To achieve this, we need to create a new SQL plan baseline for the original SQL and make it ACCEPTED. The following PL/SQL block will do the task. Here SQL_ID and PLAN_HASH_VALUE are from the modified SQL statement, while SQL_HANDLE is the one of the original SQL, into which the plan should be loaded. Note that we also make this plan baseline FIXED, meaning the optimizer will give preference to it over non-FIXED plans.

set serveroutput on
DECLARE
v_cnt PLS_INTEGER;
BEGIN
v_cnt := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE (
SQL_ID => 'dtj3d4das6a9a',
PLAN_HASH_VALUE => 544808499,
SQL_HANDLE => 'SQL_ef88a476fc38c5af',
FIXED => 'YES',
ENABLED => 'YES');
DBMS_OUTPUT.PUT_LINE ('Plan loaded: '||v_cnt);
END;
/
Now check the SQL plan baselines again to verify a new baseline is indeed created for the original SQL.

SQL_HANDLE                PLAN_NAME                      ORIGIN         ENA ACC FIX CREATED
------------------------- ------------------------------ -------------- --- --- --- -------------------
SQL_a7ac813cbf25e65f      SQL_PLAN_agb417kzkbtkz479e6372 AUTO-CAPTURE   YES YES NO  13-MAY-16 11.06.22.000000 AM

SQL_ef88a476fc38c5af      SQL_PLAN_fz254fvy3jjdg479e6372 MANUAL-LOAD    YES YES YES 13-MAY-16 11.15.02.000000 AM
                          SQL_PLAN_fz254fvy3jjdgc4138c40 MANUAL-LOAD    YES YES NO  13-MAY-16 10.57.36.000000 AM
Run explain plan for the original SQL statement; you should see the following line included in the result:

- SQL plan baseline "SQL_PLAN_fz254fvy3jjdg479e6372" used for this statement
Summary
By using a SQL plan baseline we can force the optimizer to use the execution plan of a modified SQL (with the hint added) for the original statement.

Sunday, February 17, 2019

Configure DB Console


PreReq:

Set environments (with example values):

export ORACLE_HOME=/oracle/11.2.0.4
export ORACLE_SID=orcl
export ORACLE_HOSTNAME=dbserver

Create the repository and configure the DB Console:

% emca -config dbcontrol db -repos create

parameters:
- database SID
- listener's port
- password for SYS
- password for SYSMAN
- password for DBSNMP

Drop the repository and deconfig the DB Console:

% emca -deconfig dbcontrol db -repos drop

parameters:
- database SID
- listener's port
- password for SYS
- password for SYSMAN

Drop the DB Console (manually):

Warning: this command puts the database in Quiesce Mode for the DB Control Releases 10.x to 11.1.x.
Starting with DB Control Release 11.2.x, the database is no longer put in quiesce mode.

SQL> conn / as sysdba
SQL> drop user sysman cascade;
SQL> drop role MGMT_USER;
SQL> drop user MGMT_VIEW cascade;
SQL> drop public synonym MGMT_TARGET_BLACKOUTS;
SQL> drop public synonym SETEMVIEWUSERCONTEXT;

% emca -deconfig dbcontrol db -repos drop

Link for using DBConsole:

https://servername:port/em



Identify the blocking sessions

The two methods below help to find blocking sessions.
Method-1:
The query below gives you the SIDs of the sessions that are blocking others.
1) select sid from v$lock where block=1;


2) Find the SQL text for these sessions:

select sql_text from v$sqltext where hash_value=( select prev_hash_value from v$session where sid='&sid');

3) Check other details such as SID, SERIAL#, OSUSER, MACHINE and STATUS (ACTIVE/INACTIVE) by passing the SIDs from the previous queries.
Check all the SIDs from the results: for one of them you may not get any SQL text and its status will be ACTIVE; that is the main culprit, the blocking session holding the lock that the other sessions are waiting on.


select sid||' - '||serial#||' - '||osuser||' - '||username||' - '||machine||' - '||status||' - '||logon_time
from v$session where sid=&sid;

4) Identify the holder session, which is ACTIVE. For more confirmation you can also check the holders and waiters.

check holders & waiters:
========================
select decode(request,0,'Holder: ','Waiter: ')||sid sess, id1, id2, lmode, request, type from v$lock
where (id1, id2, type) IN (SELECT id1, id2, type from v$lock where request>0) ORDER BY id1, request;

This query returns more details: the top row is the holder and the others are waiters. The holder is nothing but the active session you identified with the previous query.

5) Kill the holder session:

ALTER SYSTEM KILL SESSION '&sid, &serial';


Method-2:

SELECT 'alter system kill session ''' || s.sid || ',' || s.SERIAL# || ''';' a,
'ps -ef |grep LOCAL=NO|grep ' || p.SPID SPID,
'kill -9 ' || p.SPID
FROM gv$session s, gv$process p
WHERE ( (p.addr(+) = s.paddr) AND (p.inst_id(+) = s.inst_id))
AND s.sid = &sid;

You will get results like:
alter system kill session 'SID, SERIAL#';
ps -ef |grep LOCAL=NO|grep PID
kill -9 ProcessID

Connect to the Oracle public yum server

Oracle Linux 7
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-ol7.repo
Oracle Linux 6
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-ol6.repo
Oracle Linux 5
# cd /etc/yum.repos.d
# wget http://public-yum.oracle.com/public-yum-el5.repo

Get standby redo log details

select sl.group#
, sl.sequence#
, ceil(sl.bytes / 1048576) mb
, l.member
from v$standby_log sl
, v$logfile l
where sl.group# = l.group#
;

Monitor recovery progress from standby database

SELECT TO_CHAR(START_TIME,'DD-MON-YYYY HH24:MI:SS') "Recovery Start Time",
  TO_CHAR(item)
  ||' = '
  ||TO_CHAR(sofar)
  ||' '
  ||TO_CHAR(units) "Progress", comments
FROM v$recovery_progress
WHERE start_time=
  (SELECT MAX(start_time) FROM v$recovery_progress
  );
 

SQL plan monitoring while a query is executing

STEP-1) Get sql_id from v$sql_monitor
SELECT sql_id FROM  v$sql_monitor;

STEP-2) Get the plan details
 SELECT sid,
  sql_id,
  status,
  plan_line_id,
  plan_operation,
  plan_options, 
  output_rows 
FROM v$sql_plan_monitor
--WHERE status not like 'DONE%'
 ORDER BY sid,plan_line_id;

Find the blocking sessions

SELECT
s.inst_id,
s.blocking_session,
s.sid,
s.serial#,
s.seconds_in_wait,
s.event
FROM
gv$session s
WHERE blocking_session IS NOT NULL
and s.seconds_in_wait > 15;

Saturday, February 16, 2019

Get bind variable values of the application query using sql_id

SELECT t.sql_id,
  b.last_captured,
  t.sql_text sql,
  b.hash_value,
  b.name bind_name,
  b.value_string bind_value
FROM gv$sql t,
  gv$sql_bind_capture b
WHERE t.sql_id      =b.sql_id
AND b.value_string IS NOT NULL
AND t.sql_id        ='&sql_id' --'6mz9xrh5nc007'
ORDER BY b.last_captured DESC;

Active sessions with sql's

SELECT SS.sid,
       SS.serial#,
       SUBSTR(SS.USERNAME, 1, 15) USERNAME,
       SS.OSUSER "USER",
       AR.MODULE || ' @ ' || SS.MACHINE CLIENT,
       SS.PROCESS PID,
       TO_CHAR(AR.LAST_LOAD_TIME, 'DD-Mon HH24:MI:SS') LOAD_TIME,
       AR.DISK_READS DISK_READS,
       AR.BUFFER_GETS BUFFER_GETS,
       SUBSTR(SS.LOCKWAIT, 1, 10) LOCKWAIT,
       W.EVENT EVENT,
       SS.STATUS,
       AR.SQL_fullTEXT SQL
  FROM V$SESSION_WAIT W, V$SQLAREA AR, V$SESSION SS
 WHERE SS.SQL_ADDRESS = AR.ADDRESS
   AND SS.SQL_HASH_VALUE = AR.HASH_VALUE
   AND SS.SID = W.SID(+)
   AND SS.STATUS = 'ACTIVE'
   AND W.EVENT != 'client message'
   and SS.username is not null
 ORDER BY AR.LAST_LOAD_TIME DESC,
          SS.LOCKWAIT       ASC,
          SS.USERNAME,
          AR.DISK_READS     DESC;

High cost sql_id's

select  sp.sql_id, object_owner, object_name, operation,cost,cardinality,cpu_cost,IO_COST
  from V$SQL_PLAN sp
 where /*operation = 'TABLE ACCESS'
   and options = 'FULL'
   and */object_owner not in ('SYS', 'SYSTEM', 'DBSNMP')
   and cost is not null
   order by cost desc, cpu_cost desc;

Get top CPU consuming sessions with SQL

SELECT
       tmp.sid,tmp.serial#,
       program,
       cpu_usage_seconds,
       sqlarea.SQL_fullTEXT,
       DBMS_LOB.substr(sqlarea.SQL_fullTEXT, 32767) SQL
  from (select s.username,program,
               t.sid,
               s.serial#,
               s.sql_id,
               SUM(VALUE / 100) as cpu_usage_seconds
          FROM v$session s, v$sesstat t, v$statname n
         WHERE t.STATISTIC# = n.STATISTIC#
           AND NAME like '%CPU used by this session%'
           AND t.SID = s.SID
           AND s.status = 'ACTIVE'
           AND s.username is not null
         GROUP BY username,program, t.sid, s.serial#, s.sql_id) tmp,
       V$sqlarea sqlarea
 where tmp.sql_id = sqlarea.sql_id
 order by cpu_usage_seconds desc;

Run SQL Tuning Advisor For A Sql_id

STEP 1. Create Tuning Task:

DECLARE
  l_sql_tune_task_id  VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
                          sql_id      => '7n73r5q1kwxgr', ---<< use sqlid which needs to be tuned
                          scope       => DBMS_SQLTUNE.scope_comprehensive,
                          time_limit  => 500,
                          task_name   => '7n73r5q1kwxgr_tuning_task1',
                          description => 'Tuning task1 for statement 7n73r5q1kwxgr');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/

STEP 2. Execute Tuning task:
EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => '7n73r5q1kwxgr_tuning_task1');

STEP 3. Get the Tuning advisor report:

select dbms_sqltune.report_tuning_task('7n73r5q1kwxgr_tuning_task1') from dual;

STEP 4. Get list of tuning task present in database:

SELECT task_id,task_name, description, advisor_name, execution_start, execution_end, status
     FROM dba_advisor_tasks
     WHERE task_name='7n73r5q1kwxgr_tuning_task1'
     ORDER BY task_id DESC;

SELECT * FROM DBA_ADVISOR_LOG WHERE task_id ='6268';

STEP 5. Drop a tuning task:
execute dbms_sqltune.drop_tuning_task('7n73r5q1kwxgr_tuning_task1');

Thursday, February 14, 2019

DATA GUARD SWITCHOVER & FAILOVER Operations

Oracle Data Guard supports two role-transition operations:

Switchover: This is done when both primary and standby databases are available.
-> Planned role reversal
-> Used for OS or hardware maintenance
Failover: This is done when the primary database is no longer available (i.e. in a disaster)
-> Unplanned role reversal
-> Emergency use
-> Zero or minimal data loss (depending on the choice of data-protection mode)
-> Can be initiated automatically when fast-start failover is enabled
A switchover is a planned role reversal between the primary and the standby databases. It is used when there is a planned outage on the primary database or primary server and you do not want extended downtime on the primary database. The switchover allows you to switch the roles of the databases so that the standby database becomes the primary database, and all your users and applications can continue operating on the “new” primary database (on the standby server). During the switchover operation there is a small outage; how long it lasts depends on a number of factors, including the network and the number and sizes of the redo logs. The switchover operation happens on both the primary and the standby database.
A failover operation is what happens when the primary database is no longer available. The failover operation only happens on the standby database: it activates the standby database and turns it into a primary database. This process cannot be reversed, so the decision to fail over should be made carefully. The failover process is initiated during a real disaster or severe outage.
PRIMARY DATABASE SIDE:
 SQL>select switchover_status from v$database; 
SQL> alter database commit to switchover to physical standby with session shutdown;
SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter system set log_archive_dest_state_2=defer;
SQL> recover managed standby database using current logfile disconnect;
STANDBY DATABASE SIDE:
SQL> select switchover_status from v$database;
SQL> alter database commit to switchover to primary;
SQL> shutdown immediate
SQL> startup
SQL> select name, open_mode, database_role from v$database;

Switchback: To switch back, follow the same steps mentioned above.

Issue: After the switchback I got the error below in my primary alert log:
RFS[8455]: Assigned to RFS process 32073
RFS[8455]: Database mount ID mismatch [0x94628a9b:0x946282da] (2489485979:2489483994)
RFS[8455]: Client instance is standby database instead of primary
RFS[8455]: Not using real application clusters
Solution: I forgot to run this on the actual standby (before the switchover started):
SQL>alter system set log_archive_dest_state_2=defer;
System altered.

Saturday, February 9, 2019

Latch & Lock

Latches are similar to locks but they operate on memory to protect code and internal data structures by preventing concurrent access ["Latches are serialization mechanisms that protect areas of Oracle’s shared memory (the SGA)"]. For example, the LRU latches are used when managing the buffer cache, an operation that is restricted to being run by a single process; other processes must wait for the current process to complete its actions on the buffer cache before the next one in line can proceed. The latch holds this structure for the current process to access; when the current process is done the latch is released and the next process in the queue can acquire it.


              In simple terms, latches prevent two processes from simultaneously updating - and possibly corrupting - the same area of the SGA. A latch is a low-level serialization mechanism.


              Put another way, latches are like locks for RAM memory structures; they prevent concurrent access and ensure serial execution of kernel code. The LRU (least recently used) latches are used when seeking, adding, or removing a buffer from the buffer cache, an action that can only be done by one process at a time. Contention on an LRU latch usually means that there is a RAM data block that is in high demand. If a latch is not available, a 'latch free' miss statistic is recorded.

              If the latch is already in use, Oracle can assume that it will not be in use for long, so rather than go into a passive wait (e.g., relinquish the CPU and go to sleep) Oracle will retry the operation a number of times before giving up.  This algorithm is called acquiring a spin lock and the number of “spins” before sleeping is controlled by the Oracle initialization parameter “_spin_count”.
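
A hedged sketch of seeing which latches are spinning and sleeping the most (SPIN_GETS counts gets obtained only after spinning):

SELECT name, gets, misses, spin_gets, sleeps
FROM v$latch
WHERE sleeps > 0
ORDER BY sleeps DESC;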


Locks 

Protect the logical contents of the database object (table, index) from other transactions. 

Are held for the transaction duration.

Provide rollback capability for the associated transaction.


Latch

Protect the critical sections of the associated internal data structures from other threads.

Are held only until the operation completes and then are released.

Prevent concurrent access to a memory structure. 


Latch occurrence:

Oracle sessions need to update or read from the SGA for almost all database operations.  For instance:

When a session reads a block from disk, it must modify a free block in the buffer cache and adjust the buffer cache LRU (Least Recently Used) chain.

When a session reads a block from the SGA, it will modify the LRU chain.

When a new SQL statement is parsed, it will be added to the library cache within the SGA.

As modifications are made to blocks, entries are placed in the redo buffer.

 The database writer periodically writes buffers from the cache to disk (and must update their status from “dirty” to “clean”).

 The redo log writer writes entries from the redo buffer to the redo logs.

 Latches prevent any of these operations from colliding and possibly corrupting the SGA. 

                   

Root causes of Latch contention:

The latches that most frequently affect performance are those protecting the buffer cache, areas of the shared pool and the redo buffer.



Library cache and shared pool latches:  These latches protect the library cache in which shareable SQL is stored. In a well designed application there should be little or no contention for these latches, but in an application that uses literals instead of bind variables, library cache contention is common; favouring soft parses over hard parses helps avoid library cache contention.
Cache buffers chain latches: These latches are held when sessions read or write to buffers in the buffer cache. There are typically a very large number of these latches each of which protects only a handful of blocks. Contention on these latches is typically caused by concurrent access to a very “hot” block and the most common type of such a hot block is an index root or branch block (since any index based query must access the root block).
Redo copy/redo allocation latches:  These latches protect the redo log buffer, which buffers entries made to the redo log.   These latches were a significant problem in earlier versions of Oracle, but are rarely encountered today. 

Run the following queries to investigate latch contention:


SELECT n.name, l.sleeps
  FROM v$latch l, v$latchname n
 WHERE n.latch#=l.latch# and l.sleeps > 0
 ORDER BY l.sleeps;

SELECT n.name, SUM(w.p3) sleeps
  FROM V$SESSION_WAIT w, V$LATCHNAME n
 WHERE w.event = 'latch free'
   AND w.p2 = n.latch#
 GROUP BY n.name;