Oracle Runbook


Table of Contents

Small Table Threshold.....................................................................................................................................8
SKIP INDEX......................................................................................................................................................9
Row Movement............................................................................................................................................10
Table Fragmentation.....................................................................................................................................11
ORA-01654: unable to extend......................................................................................................................12
ORA-01591: lock held by in-doubt distributed transaction...........................................................................13
Query: Table fragmentation check...............................................................................................14
Tables based upon usage (hot tables)..........................................................................................................15
Gather table stats.........................................................................................................................................17
Table Scans...................................................................................................................................................18
TOP:10 tables by physical/disk IO.................................................................................................................19
Logical and physical reads on an individual table:........................................................................................20
AWR Snapshot..............................................................................................................................................21
Generating random AWR snapshot:.........................................................................................................21
Taking statspack snapshot:.......................................................................................................................21
QUERIES (day to day useful).........................................................................................................................22
Hot Tables (w.r.t physical reads)...............................................................................................................22
Check CACHE configured correctly...........................................................................................................22
Check which segments are in buffer pool.................................................................................................22
Cursor Leaks..............................................................................................................................................23
Rows retrieved, short scans, Long scan....................................................................................................23
Process Table scans (Avg).........................................................................................................................23
What is happening in the past 5 minutes?................................................................................................24
Exadata best practices (x2-2/x2-8)................................................................................................................25
Reading AWR report.....................................................................................................................................26
Physical reads (vs) Direct path reads............................................................................................................27
Queries.........................................................................................................................................................29
Query to get number of buffers................................................................................................................29
Dropping sql profiles.....................................................................................................................................30
SQL Plan baselines........................................................................................................................................31
Gather statistics/Analyze objects..................................................................................................................32
Table online reorg.........................................................................................................................................33
KEEP Buffer Pool...........................................................................................................................................34
Oracle Memory Structures...........................................................................................................................35
Compression in Ebiz databases.....................................................................................................................36
INI_TRANS.....................................................................................................................................................37
PCT_FREE & PCT_USED.................................................................................................................................38
Row Chaining & Migrating............................................................................................................................39
ROWID Pseudocolumn..................................................................................................................................40
Index Fragmentation....................................................................................................................................41
Index Fragmentation....................................................................................................................................43
ALTER USER...................................................................................................................................................44
Statspack creation........................................................................................................................................45
v$active_session_history..............................................................................................................................47
Resolving session performance issues..........................................................................................................48
Checking Data Access Waits.....................................................................................................................49
Forcing direct path reads to SGA..................................................................................................................51
Check what objects to CACHE in memory.....................................................................................................52
V8 bundled call.............................................................................................................................................54
Index usage...................................................................................................................................................55
AWR Warehouse...........................................................................................................................................56
Table stats.....................................................................................................................................................57
Stats for stale tables only..........................................................................................................................57
Stale stats schema wise............................................................................................................................58
Stale stats script for all stale tables:..........................................................................................................58
TM Contention wait events..........................................................................................................................59
Export and Import.........................................................................................................................................60
To take SCHEMA backup:..........................................................................................................................60
expdp par file............................................................................................................................................60
Tablespace Free check..................................................................................................................................61
Variable substitution.....................................................................................................................................62
Stats after upgrade to 11.2.0.0.4..................................................................................................................63
STATISTICS....................................................................................................................................................64
Checking auto optimizer statistics collection is running...........................................................................64
Disabling and re-enabling auto optimizer statistics..................................................................................64
Doc ID.......................................................................................................................................................64
Tables........................................................................................................................................................64
Parameters...............................................................................................................................................64
Queries.....................................................................................................................................................64
DBMS_STATS.GATHER_SCHEMA_STATS...................................................................................................65
Tables with locked statistics.....................................................................................................................65
FND_STATS (vs) GATHER STATS................................................................................................................65
E-Biz recommended patches........................................................................................................................66
Enabling Trace (Novice guide)......................................................................................................................67
Identifying Oracle version.............................................................................................................................68
Histograms....................................................................................................................................................69
method_opt..................................................................................................................................................71
Verifying CBO statistics.................................................................................................................................72
Oracle Applications 11i initialization parameters.........................................................................................73
Database Size................................................................................................................................................74
Refresh a Materialized view.........................................................................................................................75
What did change lately.................................................................................................................................76
Table and Index statistics history..................................................................................................................77
OEL/Virtual Box setup...................................................................................................................................78
Alter Index Unusable script...........................................................................................................................79
Grant EXPLAIN plan privilege........................................................................................................................80
Grant sql tuning advisor............................................................................................................................80
Check Archivelog..........................................................................................................................................81
MV Refresh using DELETE (vs) TRUNCATE....................................................................................................82
Index Monitoring : v$object_usage is empty................................................................................................83
concurrent request id to sid.........................................................................................................................84
SID to session details....................................................................................................................................85
How much undo getting generated?............................................................................................................86
Flashback query............................................................................................................................................87
BRPROD........................................................................................................................................................88
Alter table nocompress.................................................................................................................................90
Parallel Query...............................................................................................................................................91
Oracle Net Performance Tuning...................................................................................................................92
Chained rows................................................................................................................................................93
Method:2..................................................................................................................................................93
Killing SPID/process id..................................................................................................................................95
Recovery after killing a large transaction......................................................................................................96
Corrupt blocks..............................................................................................................................................97
CREATE USER................................................................................................................................................98
IMP status.....................................................................................................................................................99
Object/ table locks......................................................................................................................................100
RAC Table locks (To find out all locks).....................................................................................................100
To find out blocking locks across RAC instances:....................................................................................100
Golden Gate (GG): Replication after filtration..............................................................................101
Enable/Disable Archive logging mode........................................................................................................102
Enable Archive logging mode..................................................................................................................102
Disable archive logging mode.................................................................................................................102
alter database open....................................................................................................................................103
TEMP TABLESPACE USAGE..........................................................................................................................104
ADD TEMP tablespace................................................................................................................................105
Tablespace space addition..........................................................................................................................106
Drop Temp Datafile.....................................................................................................................................107
Extract tablespace definition......................................................................................................................108
Oracle Events..............................................................................................................................................109
Library Cache Lock..................................................................................................................................109
What is library cache?.........................................................................................................................109
kill a session................................................................................................................................................110
Pending statistics........................................................................................................................................111
GET_PREFS..................................................................................................................................................113
Parameters details:.................................................................................................................................113
Export Plan baseline from SVC12 to PROD12.............................................................................................115
Creating staging table.............................................................................................................................115
Export plan baseline...............................................................................................................................115
Export.....................................................................................................................................................115
Import.....................................................................................................................................................115
Import SQL plan baselines......................................................................................................................115
Create Database Link..................................................................................................................................116
Top 10 tables by size...................................................................................................................................117
Reading execution plans 12.1.....................................................................................................................118
Discoverer Workbooks................................................................................................................................119
Flush a single SQL ID from Shared Pool.......................................................................................................120
Proactive performance tuning....................................................................................................................122
Parameter Recommendations for 12c........................................................................................................123
Part1.......................................................................................................................................................123
Part2.......................................................................................................................................................123
Adaptive query optimization......................................................................................................................124
EXPLAIN PLAN from sql_id..........................................................................................................................125
Gather Index statistics................................................................................................................................126
DECODE......................................................................................................................................................127
Redo Log Switches......................................................................................................................................128
Standard.................................................................................................................................................128
Log switches by hour..............................................................................................................................128
Count and size of Redo log space...........................................................................................................128
Exact time when you get a log switch.....................................................................................................129
What is inside SQL Profile...........................................................................................................................130
List of last 500 SQL statements ran on ORACLE database...........................................................................131
Analyze index..............................................................................................................................................132
Lock modes.................................................................................................................................................133
Block changes of an Object.........................................................................................................................134
High SWAP troubleshooting in AIX...............................................................................................135
pid to sid.....................................................................................................................................................136
SID to PID....................................................................................................................................................137
Remove discoverer reports from database................................................................................................138
truss output................................................................................................................................................139
LOBSEGMENT to TABLE mapping...............................................................................................................140
Purging programs.......................................................................................................................................141
GL_JE_LINES............................................................................................................................................141
OE_ORDER_LINES_HISTORY...................................................................................................................141
CSI_PERIOD_CLOSE_SUMMARY:............................................................................................................141
Finding Trace Files......................................................................................................................................142
Setting up trace...........................................................................................................................................143
Automated Segment Adviser Advice..........................................................................................................144
Recommendations (Sam Alapati)................................................................................................................145
Generate AWR baselines (Static)................................................................................................................146
ORA-4020 Deadlock while compiling PL/SQL package................................................................................147
Number of migration/chained rows...........................................................................................................149
APPEND (vs) APPEND_VALUES (vs) SYS_DL_CURSOR.................................................................................150
Conditional Index........................................................................................................................................151
Exadata Compression.................................................................................................................................152
Compare SQL Hash values for same SQL_ID..............................................................................................153
EUL5_QPP_STATS purge.............................................................................................................................154
Result set to Workbook/sheet mapping.................................................................................................155
Workbook/sheet to Result set mapping:................................................................................................155
Drop orphaned scheduled reports..........................................................................................................155
How to recover deleted discoverer workbook........................................................................................155
Last accessed date of an Oracle workbook.................................................................................156
Query:.....................................................................................................................................................156
WORKBOOk (vs) Sheet (vs) Result table name...........................................................................................158
EUL5_BATCH_REPORTS..............................................................................................................................159
Final Query.................................................................................................................................................160
Query1....................................................................................................................................................160
Query2:...................................................................................................................................................160
Query3....................................................................................................................................................161
Display date with hours/minutes information............................................................................................162
Select from a partition................................................................................................................................163
Oracle table growth prediction...................................................................................................................164
Table ROWS COUNT....................................................................................................................................165
Find out expired reports.............................................................................................................................166
Find out AUTO Refresh reports...................................................................................................................167
BRR_STATE..................................................................................................................................................168
Disco clean up.............................................................................................................................................169
1. Drop tables first..............................................................................................................................169
2. Delete from EUL5_BQ_TABLES.......................................................................................................169
3. Delete from EUL5_BQ_DEPS...........................................................................................................169
4. Delete from eul5_segments............................................................................................................169
5. Delete from eul5_batch_queries....................................................................................................170
6. delete from eul5_batch_params.....................................................................................................170
7. delete from eul5_batch_sheets......................................................................................................170
8. delete from eul5_br_runs...............................................................................................................171
9. Run all.............................................................................................................................................171
Find out DISCO user sessions......................................................................................................................172
Job ID to DISCO user mapping....................................................................................................................173
DISCO sessions running over 3 hours..........................................................................................................174
LEAD function.............................................................................................................................................175
DISCO Adhoc Reports.................................................................................................................................187
Read by other session.................................................................................................................................188
Column usage.............................................................................................................................................189
Trace files....................................................................................................................................................190
Export an SQL Profile..................................................................................................................................191
Space occupied by expired/errored out discoverer reports.......................................................................192
Opening a session in VNC...........................................................................................................................193
RAC Technologies support matrix...............................................................................................................194
Public (vs) private (vs) virtual......................................................................................................................195
Using coe_xfr_sql_profile.sql to force a good plan.....................................................................................196
Small Table Threshold

In 11g, if a table's size exceeds _small_table_threshold and the optimizer decides to go for a full table scan,
then the table is read into the PGA rather than the SGA.

This is observed as direct path reads in 11.x onward.

_small_table_threshold parameter is set to about 2% of the size of the buffer cache.

Below query will give buffer cache size:

SQL> SELECT name,block_size,buffers FROM v$buffer_pool;

NAME                 BLOCK_SIZE    BUFFERS
-------------------- ---------- ----------
DEFAULT                    8192    5066014

_small_table_threshold = 2% of 5,066,014 ≈ 101,320 blocks

Blocks occupied by a specific Table:


SQL> select blocks from dba_tables where table_name = 'XXDBD_BR_CALL_ENTRY';

BLOCKS
----------
197669

Conclusion: Here 197,669 > 101,320, so XXDBD_BR_CALL_ENTRY qualifies for direct path reads if a full table
scan is initiated on it by the optimizer.
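The check above can be generalized. A sketch, assuming the approximate "2% of the DEFAULT buffer pool" rule (the real cut-off is the hidden parameter _small_table_threshold, which may differ by version and can be set explicitly):

```sql
-- Tables whose block count exceeds ~2% of the DEFAULT pool and so may be
-- read via direct path on a serial full table scan (approximation only).
SELECT t.owner, t.table_name, t.blocks, ROUND(p.threshold) threshold_blocks
FROM   dba_tables t,
       (SELECT SUM(buffers) * 0.02 threshold
          FROM v$buffer_pool
         WHERE name = 'DEFAULT') p
WHERE  t.blocks > p.threshold
ORDER  BY t.blocks DESC;
```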
SKIP INDEX

Skip index scans happen for composite indexes only. A composite index is simply an index built on multiple
columns. For example:

CREATE INDEX "XXDBD"."XXDBD_BR_CALL_ENTRY_R1" ON "XXDBD"."XXDBD_BR_CALL_ENTRY"


("WM_STATUS", "SOURCE_TYPE")

Index skip scans improve index scans by nonprefix columns. Skip scanning lets a composite index be split
logically into smaller subindexes. In skip scanning, the initial column of the composite index is not
specified in the query. In other words, it is skipped.

The number of logical subindexes is determined by the number of distinct values in the initial column. Skip
scanning is advantageous if there are few distinct values in the leading column of the composite Index and
many distinct values in the nonleading key of the Index.

Scenario:

SQL> select distinct wm_status from xxdbd.XXDBD_BR_CALL_ENTRY;

W
-
y

Only one distinct WM_STATUS value exists here, so the leading column is a good skip-scan candidate.
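A predicate on the second index column alone can then use a skip scan. A sketch (the SOURCE_TYPE literal is made up for illustration; the INDEX_SS hint only encourages the optimizer, it does not force the plan):

```sql
-- Leading column WM_STATUS is omitted, so the optimizer may split the
-- composite index XXDBD_BR_CALL_ENTRY_R1 into one logical subindex per
-- distinct WM_STATUS value and probe each of them.
SELECT /*+ INDEX_SS(c XXDBD_BR_CALL_ENTRY_R1) */ COUNT(*)
FROM   xxdbd.xxdbd_br_call_entry c
WHERE  c.source_type = 'PHONE';   -- hypothetical value
```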
Row Movement
http://www.databasejournal.com/features/oracle/article.php/3676401/Row-Movement-in-Oracle.htm
Table Fragmentation
http://select-star-from.blogspot.com/2013/09/how-to-check-table-fragmentation-in.html
ORA-01654: unable to extend

SQL> alter table xxdbd.XXDBD_BR_CALL_ENTRY shrink space compact;


alter table xxdbd.XXDBD_BR_CALL_ENTRY shrink space compact
*
ERROR at line 1:
ORA-01654: unable to extend index XXDBD.XXDBD_BR_CALL_ENTRY_R1 by 1024 in tablespace
APPS_TS_TX_DATA

Resolution:
select TABLESPACE_NAME, FILE_NAME from dba_data_files
where tablespace_name = 'APPS_TS_TX_DATA';

SQL> alter tablespace APPS_TS_TX_DATA add datafile
'/mnt/oragato/gatodata/fs02/apps_ts_tx_data34.dbf' size 10G autoextend on maxsize 20G;
ORA-01591: lock held by in-doubt distributed transaction

Received the below error while reorganizing a table using "alter table move":

SQL> alter table xxdbd.xxdbd_br_call_entry move;


alter table xxdbd.xxdbd_br_call_entry move
*
ERROR at line 1:
ORA-01591: lock held by in-doubt distributed transaction 45.7.1598909

Solution:
SQL> commit force '45.7.1598909';

Commit complete.

SQL> alter table xxdbd.xxdbd_br_call_entry move;

Table altered.
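Before forcing a commit, the pending transaction can be inspected; if the remote side actually rolled back, "rollback force" is the safer counterpart:

```sql
-- List in-doubt distributed transactions and their state
SELECT local_tran_id, global_tran_id, state, mixed, fail_time
FROM   dba_2pc_pending;

-- If the transaction should be undone instead of committed:
-- SQL> rollback force '45.7.1598909';
```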
Query: Table fragmentation check
-- Note: blocks*8 assumes an 8 KB database block size
select table_name,round((blocks*8),2)/1024/1024 "size (GB)" ,
round((num_rows*avg_row_len/1024),2)/1024/1024 "actual_data (GB)",
(round((blocks*8),2) - round((num_rows*avg_row_len/1024),2))/1024/1024
"wasted_space (GB)",
(((round((blocks*8),2) -
round((num_rows*avg_row_len/1024),2))/1024/1024)/(round((blocks*8),2)/1024/1024))*100
"Wasted_Per"
from dba_tables
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
-- AND round((blocks*8),2)/1024/1024 > 1
and table_name = 'JTF_TASKS_B'
order by 2 asc

Query: Order by wasted space percentage descending (Table size > 2GB)
select table_name,round((blocks*8),2)/1024/1024 "size (GB)" ,
round((num_rows*avg_row_len/1024),2)/1024/1024 "actual_data (GB)",
(round((blocks*8),2) - round((num_rows*avg_row_len/1024),2))/1024/1024
"wasted_space (GB)",
(((round((blocks*8),2) -
round((num_rows*avg_row_len/1024),2))/1024/1024)/(round((blocks*8),2)/1024/1024))*100
"Wasted_Per"
from dba_tables
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
AND round((blocks*8),2)/1024/1024 >= 2
order by 5 desc
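Both queries hard-code an 8 KB block size (blocks*8). A sketch of a block-size independent variant joining dba_tablespaces (note: tablespace_name is NULL in dba_tables for partitioned tables, which this simple join will skip):

```sql
SELECT t.table_name,
       ROUND(t.blocks * ts.block_size / POWER(1024, 3), 2) "size (GB)",
       ROUND(t.num_rows * t.avg_row_len / POWER(1024, 3), 2) "actual_data (GB)"
FROM   dba_tables t, dba_tablespaces ts
WHERE  t.tablespace_name = ts.tablespace_name
AND    t.blocks * ts.block_size > t.num_rows * t.avg_row_len
ORDER  BY 2 DESC;
```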
Tables based upon usage (hot tables)
drop table temp1;

create global temporary table temp1(t1 char(100), t2 integer);

INSERT INTO temp1 SELECT object_name, count(*) FROM v$sql_plan WHERE operation = 'TABLE ACCESS'
GROUP BY object_name ORDER BY COUNT (*) DESC;

select table_name,t2 table_stats, round((blocks*8),2)/1024/1024 "size (GB)" ,


round((num_rows*avg_row_len/1024),2)/1024/1024 "actual_data (GB)",
(round((blocks*8),2) - round((num_rows*avg_row_len/1024),2))/1024/1024
"wasted_space (GB)",
(((round((blocks*8),2) -
round((num_rows*avg_row_len/1024),2))/1024/1024)/(round((blocks*8),2)/1024/1024))*100
"Wasted_Per"
from dba_tables t1, temp1 t2
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
AND t1.table_name = TRIM(t2.t1)
ORDER BY 2 desc;

drop table temp1;

***
drop table temp1;

create global temporary table temp1(t1 char(100), t2 integer);

INSERT INTO temp1 SELECT object_name, count(*) FROM v$sql_plan WHERE operation = 'TABLE ACCESS'
GROUP BY object_name ORDER BY COUNT (*) DESC;

select table_name,t2 table_stats, round((blocks*8/1024/1024),6) "size (GB)" ,


round((num_rows*avg_row_len/1024/1024/1024),6) "actual_data (GB)",
round((round((blocks*8),6) - round((num_rows*avg_row_len/1024),2))/1024/1024,6)
"wasted_space (GB)",
round((((round((blocks*8),6) -
round((num_rows*avg_row_len/1024),6))/1024/1024)/(round((blocks*8),6)/1024/1024))*100,6)
"Wasted_Per"
from dba_tables t1, temp1 t2
where (round((blocks*8),6) > round((num_rows*avg_row_len/1024),6))
--AND round((blocks*8/1024/1024),4) >= 0.5
AND t1.table_name = TRIM(t2.t1)
ORDER BY 2 desc;

***Hot tables details in MB ***


drop table temp1;

create global temporary table temp1(t1 char(100), t2 integer);

INSERT INTO temp1 SELECT object_name, count(*) FROM v$sql_plan WHERE operation = 'TABLE ACCESS'
GROUP BY object_name ORDER BY COUNT (*) DESC;

select table_name,t2 table_stats, round((blocks*8/1024),6) "size (MB)" ,


round((num_rows*avg_row_len/1024/1024),6) "actual_data (MB)",
round((round((blocks*8),6) - round((num_rows*avg_row_len/1024),2))/1024,6)
"wasted_space (MB)",
round((((round((blocks*8),6) -
round((num_rows*avg_row_len/1024),6))/1024)/(round((blocks*8),6)/1024))*100,6) "Wasted_Per"
from dba_tables t1, temp1 t2
where (round((blocks*8),6) > round((num_rows*avg_row_len/1024),6))
--AND round((blocks*8/1024/1024),4) >= 0.5
AND t1.table_name = TRIM(t2.t1)
ORDER BY 2 desc;
Gather table stats

EXEC dbms_stats.gather_table_stats('JTF','JTF_TASKS_B', degree=>10,
cascade=>TRUE, estimate_percent=>100, granularity=>'ALL');
Table Scans
table fetch by rowid typically comes from an index range scan: we get a rowid and read a
single row from the table.

table fetch continued row is a row that was fetched (whether by rowid or by scanning) that either
didn't fit on a block or, in the case of a rowid fetch, has migrated. It does not necessarily MEAN the row
was migrated; it could be that the row is just too large to fit on a single block.

table scan blocks/rows gotten - self defining, just counters of rows/blocks retrieved during
full scans.

direct read scans are full scans that bypass the buffer cache (much like parallel query
always used to, though in 11g parallel query might not anymore - it depends). We read
directly from disk into the PGA to process the data; the buffer cache is deemed far too
inefficient.

table scans long tables - big tables that were full scanned and placed into the buffer cache
in a manner so as to not fill up the buffer cache. Typically, as you scan the table you reuse
a very small set of blocks in the buffer cache over and over, so the table basically
overwrites itself in the cache. This prevents a 5 GB full table scan from obliterating a
4 GB db cache; that 5 GB scan would only use a very small section of the buffer cache.

table scans rowid ranges - self describing: scanning a portion of a table based on rowid
ranges.

table scans short tables - the table is deemed small enough to fit comfortably in the buffer
cache, so unlike the long table full table scan it does not overwrite itself in the cache.

The method for determining a small table is constantly being revised, so the process is a
little different from the 7.x days, but the net result is the same. Consider any table greater
than 5% of the size of the cache to be a large table, and anything less is probably small.
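The instance-wide values of these counters can be read from v$sysstat, e.g.:

```sql
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('table scans (short tables)',
                'table scans (long tables)',
                'table scans (rowid ranges)',
                'table scans (direct read)',
                'table scan rows gotten',
                'table scan blocks gotten');
```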
TOP:10 tables by physical/disk IO

By Physical IO:
select owner,segment_name,object_type,total_physical_reads
from ( select owner,object_name as segment_name,object_type,
value as total_physical_reads
from v$segment_statistics
where statistic_name in ('physical reads')
order by total_physical_reads desc)
where rownum <11;

By Logical IO:
select owner,segment_name,object_type,total_logical_reads
from ( select owner,object_name as segment_name,object_type,
value as total_logical_reads
from v$segment_statistics
where statistic_name in ('logical reads')
order by total_logical_reads desc)
where rownum <11;
Logical and physical reads on an individual table:

select
to_char(begin_interval_time,'yy/mm/dd/hh24') c1,
logical_reads_total c2,
physical_reads_total c3, table_scans_total, table_scans_delta
from
dba_hist_seg_stat s,
dba_hist_seg_stat_obj o,
dba_hist_snapshot sn
where
o.owner = 'AR'
and
s.obj# = o.obj#
and
sn.snap_id = s.snap_id
and
object_name = 'RA_INTERFACE_LINES_ALL'
order by
begin_interval_time desc;
AWR Snapshot
Generating a manual (ad hoc) AWR snapshot:
EXEC dbms_workload_repository.create_snapshot;

Taking statspack snapshot:


EXEC statspack.snap;
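To confirm the snapshot was taken, and to pick begin/end snap IDs for awrrpt.sql later:

```sql
SELECT snap_id,
       begin_interval_time,
       end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id DESC;
```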
QUERIES (day to day useful)
Hot Tables (w.r.t physical reads)

drop table temp_rk;


create global temporary table temp_rk(object_name char(50), object_type char(50), total_physical_reads
NUMBER);
INSERT INTO temp_rk select object_name as segment_name,object_type, value as total_physical_reads
from v$segment_statistics
where statistic_name in ('physical reads') AND object_type = 'TABLE'
order by total_physical_reads desc;

select table_name,round(round((blocks*8),2)/1024/1024,5) "size (GB)" ,


round(round((num_rows*avg_row_len/1024),2)/1024/1024,5) "actual_data (GB)",
round((round((blocks*8),2) - round((num_rows*avg_row_len/1024),2))/1024/1024,5)
"wasted_space (GB)",
round((((round((blocks*8),2) -
round((num_rows*avg_row_len/1024),2))/1024/1024)/(round((blocks*8),2)/1024/1024))*100,5)
"Wasted_Per", total_physical_reads
from dba_tables, temp_rk
where
(round((blocks*8),2) > round((num_rows*avg_row_len/1024),2)) AND
table_name=rtrim(temp_rk.OBJECT_NAME)
order by total_physical_reads desc;

drop table temp_rk;

Check CACHE configured correctly


SELECT SIZE_FOR_ESTIMATE, BUFFERS_FOR_ESTIMATE, ESTD_PHYSICAL_READ_FACTOR,
ESTD_PHYSICAL_READS
FROM V$DB_CACHE_ADVICE
WHERE NAME = 'DEFAULT'
AND BLOCK_SIZE = (SELECT VALUE FROM V$PARAMETER WHERE NAME = 'db_block_size')
AND ADVICE_STATUS = 'ON';

Check which segments are in buffer pool


COLUMN OBJECT_NAME FORMAT A40
COLUMN NUMBER_OF_BLOCKS FORMAT 999,999,999,999

SELECT o.OBJECT_NAME, COUNT(*) NUMBER_OF_BLOCKS


FROM DBA_OBJECTS o, V$BH bh
WHERE o.DATA_OBJECT_ID = bh.OBJD
AND o.OWNER != 'SYS'
GROUP BY o.OBJECT_NAME
ORDER BY COUNT(*) desc;

Cursor Leaks
select oc.sid,oc.hash_value,oc.sql_text,count(*)
from v$open_cursor oc,
v$session ss
where
oc.sid = ss.sid and
oc.sql_text not like '%COMMIT%'
group by oc.sid,oc.hash_value,oc.sql_text
having count(*) > 5
order by 4;

Metalink: 248857.1

Rows retrieved, short scans, Long scan


The below query gives table scan details for currently connected sessions:

select ss.username||'('||se.sid||') ' "User Process",


sum(decode(name,'table scans (short tables)',value)) "Short Scans",
sum(decode(name,'table scans (long tables)', value)) "Long Scans",
sum(decode(name,'table scan rows gotten',value)) "Rows Retrieved"
from v$session ss,
v$sesstat se,
v$statname sn
where se.statistic# = sn.statistic#
and (name like '%table scans (short tables)%'
or name like '%table scans (long tables)%'
or name like '%table scan rows gotten%')
and se.sid = ss.sid
and ss.username is not null
group by ss.username||'('||se.sid||') '
order by 2 desc;

Process Table scans (Avg)

- User Process - Name of user process
- Short Scans - Number of short scans (<= 5 blocks)
- Long Scans - Number of long scans (> 5 blocks)
- Rows Retrieved - Number of rows retrieved
- Long Scans Length - Average long scan length (i.e. full table scan of > 5 blocks)
select ss.username||'('||se.sid||') ' "User Process",
sum(decode(name,'table scans (short tables)',value)) "Short Scans",
sum(decode(name,'table scans (long tables)', value)) "Long Scans",
sum(decode(name,'table scan rows gotten',value)) "Rows Retrieved",
round((sum(decode(name,'table scan rows gotten',value)) -
(sum(decode(name,'table scans (short tables)',value)) * 5)) /
(sum(decode(name,'table scans (long tables)', value))),2) "Long Scans Length"
from v$session ss,
v$sesstat se,
v$statname sn
where se.statistic# = sn.statistic#
and (name like '%table scans (short tables)%'
or name like '%table scans (long tables)%'
or name like '%table scan rows gotten%')
and se.sid = ss.sid
and ss.username is not null
group by ss.username||'('||se.sid||') '
having sum(decode(name,'table scans (long tables)', value)) != 0
order by 3 desc

What is happening in the past 5 minutes?


SELECT NVL(a.event, 'ON CPU') AS event,
COUNT(*) AS total_wait_time
FROM v$active_session_history a
WHERE a.sample_time > SYSDATE - 5/(24*60) -- 5 mins
GROUP BY a.event
ORDER BY total_wait_time DESC;

Ref: http://oracle-base.com/articles/10g/active-session-history.php
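The same ASH window can be grouped by sql_id to see which statements consumed the time; a sketch following the query above:

```sql
SELECT a.sql_id, NVL(a.event, 'ON CPU') AS event, COUNT(*) AS samples
FROM   v$active_session_history a
WHERE  a.sample_time > SYSDATE - 5/(24*60)   -- last 5 mins
AND    a.sql_id IS NOT NULL
GROUP  BY a.sql_id, NVL(a.event, 'ON CPU')
ORDER  BY samples DESC;
```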
Exadata best practices (x2-2/x2-8)

Doc ID 757552.1
Reading AWR report

Oracle's time-series reports, STATSPACK and AWR, were originally created exclusively for use by Oracle
Technical Support Services.

AWR: requires the separately licensed Diagnostics Pack

STATSPACK: free

Tool: Ion For Oracle


http://www.ion-dba.com/

ASMM/AMM: Automatic (shared) memory management

By enabling ASMM (or AMM in 11g), you allow Oracle to dynamically change the sizes of the major SGA pools such as
shared_pool_size, db_32k_cache_size, db_16k_cache_size, large_pool_size and so on.

Please note this is a reactive mechanism that changes the SGA after detecting that the database has experienced
degradation. An experienced DBA can do a much better job!

AMM has been enhanced in 11g, but it is still primarily designed for small databases where it is not
feasible to proactively optimize the major RAM pools.
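For reference, the two mechanisms are enabled by different parameters (the sizes below are placeholders, not recommendations):

```sql
-- ASMM (10g+): Oracle resizes the SGA pools within sga_target
ALTER SYSTEM SET sga_target = 8G SCOPE = BOTH;

-- AMM (11g+): Oracle manages SGA + PGA together within memory_target
-- (memory_max_target must already be >= this value; otherwise set it
--  in the spfile and restart the instance first)
ALTER SYSTEM SET memory_target = 12G SCOPE = BOTH;
```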

OLTP Workload indicators:
Transactions/second > 1
"db file sequential read" appears in the top-5 timed events at more than 10 reads/second

Ref: http://troype.com/?blog5

http://www.slideshare.net/karlarao/mining-the-awr-repository-for-capacity-planning-and-visualization-
white-p
Physical reads (vs) Direct path reads

The following SQL gives the required details:


--- Hourly based comparison between Physical reads and Direct Reads
select
s1.h_date,
trunc(s1.v_avg,2) pyh_reads,
trunc(s2.v_avg,2) dir_reads,
trunc(s1.mb_sec,2) pyh_mb_s,
trunc(s2.mb_sec,2) dir_mb_s,
trunc((s2.v_avg/s1.v_avg)*100,2) R_PCT
from
--S1-B-----------
(
select
trunc(b_snap_date,'HH') h_date,
sum(snap_value) svalue,
sum(snap_value/snap_secs) v_avg,
sum(snap_value/snap_secs)*v_db_block_size/1024/1024 mb_sec
from
(select
s.INSTANCE_NUMBER,
cast (s.END_INTERVAL_TIME as date) e_snap_date,
cast (s.BEGIN_INTERVAL_TIME as date) b_snap_date,
(cast(s.END_INTERVAL_TIME as date) - cast(s.BEGIN_INTERVAL_TIME as date))*24*60*60 snap_secs,
t.VALUE,
(t.VALUE-LAG (t.VALUE) OVER (ORDER BY s.INSTANCE_NUMBER, s.BEGIN_INTERVAL_TIME)) snap_value
from
DBA_HIST_SNAPSHOT s,
DBA_HIST_SYSSTAT t
where 1=1
and s.SNAP_ID = t.SNAP_ID
and s.DBID = t.DBID
and s.INSTANCE_NUMBER = t.INSTANCE_NUMBER
and s.DBID = (select DBID from V$DATABASE)
and t.STAT_NAME = 'physical reads'
) pr,
(select VALUE v_db_block_size from v$parameter where name = 'db_block_size')
where snap_value > 0
group by trunc(b_snap_date,'HH'),v_db_block_size
) S1,
--S2-B-----------
(
select
trunc(b_snap_date,'HH') h_date,
sum(snap_value) svalue,
sum(snap_value/snap_secs) v_avg,
sum(snap_value/snap_secs)*v_db_block_size/1024/1024 mb_sec
from
(select
s.INSTANCE_NUMBER,
cast (s.END_INTERVAL_TIME as date) e_snap_date,
cast (s.BEGIN_INTERVAL_TIME as date) b_snap_date,
(cast(s.END_INTERVAL_TIME as date) - cast(s.BEGIN_INTERVAL_TIME as date))*24*60*60 snap_secs,
t.VALUE,
(t.VALUE-LAG (t.VALUE) OVER (ORDER BY s.INSTANCE_NUMBER, s.BEGIN_INTERVAL_TIME)) snap_value
from
DBA_HIST_SNAPSHOT s,
DBA_HIST_SYSSTAT t
where 1=1
and s.SNAP_ID = t.SNAP_ID
and s.DBID = t.DBID
and s.INSTANCE_NUMBER = t.INSTANCE_NUMBER
and s.DBID = (select DBID from V$DATABASE)
and t.STAT_NAME = 'physical reads direct'
) pr,
(select VALUE v_db_block_size from v$parameter where name = 'db_block_size')
where snap_value > 0
group by trunc(b_snap_date,'HH'),v_db_block_size
) S2
--S2-E-----------
where 1=1
and s1.h_date = s2.h_date (+)
order by
s1.h_date desc;

Note: R_PCT: How much percentage of total physical reads are direct path reads
Queries

Query to get number of buffers


SELECT NAME, BLOCK_SIZE, SUM(BUFFERS)
FROM V$BUFFER_POOL
GROUP BY NAME, BLOCK_SIZE
HAVING SUM(BUFFERS) > 0;
Dropping sql profiles

select * from dba_sql_profiles order by created desc;

exec dbms_sqltune.drop_sql_profile('SYS_SQLPROF_014d5317f7dd0002');

exec dbms_sqltune.alter_sql_profile('SYS_SQLPROF_014fb30a089a0009','STATUS','DISABLED');
exec dbms_sqltune.alter_sql_profile('SYS_SQLPROF_014fb30a089a0009','STATUS','ENABLED');
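Before dropping or disabling a profile, it may help to confirm which cursors are actually using it:

```sql
-- Cursors currently built with a SQL profile
SELECT sql_id, child_number, plan_hash_value, sql_profile
FROM   v$sql
WHERE  sql_profile IS NOT NULL;
```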
SQL Plan baselines

https://docs.oracle.com/database/121/TGSQL/tgsql_spm.htm#TGSQL658

To drop an SQL plan baseline:

Step1: Get SQL_HANDLE


SELECT /*+ PARALLEL */ SQL_HANDLE, SQL_TEXT, PLAN_NAME, ORIGIN,
ENABLED, ACCEPTED
FROM DBA_SQL_PLAN_BASELINES
--WHERE SQL_TEXT LIKE '%AP_INVOICES_ALL%';
WHERE PLAN_NAME like '%SQL_PLAN_4fr1bqy1z7yhq1e6a02c3%'

Step2: Drop using SQL_HANDLE


DECLARE
v_dropped_plans number;
BEGIN
v_dropped_plans := DBMS_SPM.DROP_SQL_PLAN_BASELINE (
sql_handle => 'SQL_f7e63dda43f1e927'
);
DBMS_OUTPUT.PUT_LINE('dropped ' || v_dropped_plans || ' plans');
END;
/
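Baselines can also be created, not just dropped. A sketch that captures the current cursor-cache plan for one statement (the sql_id is a placeholder):

```sql
DECLARE
  v_loaded PLS_INTEGER;
BEGIN
  -- Load the plan(s) currently in the cursor cache as accepted baselines
  v_loaded := DBMS_SPM.LOAD_PLANS_FROM_CURSOR_CACHE(
                sql_id => '4fr1bqy1z7yhq');   -- hypothetical sql_id
  DBMS_OUTPUT.PUT_LINE('loaded ' || v_loaded || ' plans');
END;
/
```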
Gather statistics/Analyze objects

Exec dbms_stats.gather_table_stats('JTF', 'JTF_TASKS_B', cascade=>TRUE,degree=>10);


Table online reorg

alter table cs.cs_incidents_all_b enable row movement;


alter table cs.cs_incidents_all_b shrink space compact;
alter table cs.cs_incidents_all_b shrink space;
alter table cs.cs_incidents_all_b disable row movement;
KEEP Buffer Pool

Alter system set DB_KEEP_CACHE_SIZE = 2560M;

Alter system flush buffer_cache;  -- an outage window is an ideal time for this

Alter table jtf.jtf_tasks_b storage (buffer_pool keep);

To check which objects are in KEEP pool:


select * from dba_segments where BUFFER_POOL = 'KEEP';

BI Apps:
alter table xla.xla_distribution_links modify partition AR storage (buffer_pool keep);
alter table xla.xla_distribution_links modify partition AP storage (buffer_pool keep);

alter table AR.RA_CUSTOMER_TRX_LINES_ALL storage (buffer_pool keep);

alter table XLA.XLA_AE_LINES modify partition AR storage (buffer_pool keep);


alter table XLA.XLA_AE_LINES modify partition AP storage (buffer_pool keep);
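A quick sanity check that the KEEP pool is large enough for everything assigned to it:

```sql
-- Space needed by KEEP-pool segments vs. the configured KEEP cache
SELECT ROUND(SUM(bytes)/1024/1024) keep_segments_mb
FROM   dba_segments
WHERE  buffer_pool = 'KEEP';

SELECT value/1024/1024 keep_cache_mb
FROM   v$parameter
WHERE  name = 'db_keep_cache_size';
```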
Oracle Memory Structures
Compression in Ebiz databases

Compression in Oracle Ebiz comes up with two options:

- Compress for direct load operations (suitable for Data Warehousing)
- Compress for all operations (suitable for OLTP, including regular DML)

For example, below are the steps:

(1) Take DDL back up of table using sql developer


(2) Run statistics and calculate size of the table
Run statistics: Exec dbms_stats.gather_table_stats('JTF', 'JTF_TASKS_B',
cascade=>TRUE,degree=>10);

Table Size: select (blocks*8)/1024/1024 GB from dba_tables where table_name = 'JTF_TASKS_B'

(3) Check current compression statistics


select owner,table_name, compression, compress_for from dba_tables where table_name =
'JTF_TASKS_B'

(4) ALTER TABLE jtf.jtf_tasks_b compress for all operations parallel 20

(5) Alter table jtf.jtf_tasks_b move

(6) Please note that it’s advisable to rebuild indexes after compression.
alter index "JTF"."JTF_TASKS_B_N7" rebuild online compress parallel 20;

(7) Update statistics once again & compare the table size before and after

Note: http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#CNCPT1132
INI_TRANS
Each block has a block header.

Each block header has a transaction table.

Whenever a transaction locks rows/elements in a block, the respective transaction details are noted in
Transaction Table.

The initial size of the transaction table is specified by INI_TRANS at the object level. For tables, this value
defaults to 1 (for indexes, to 2).

The transaction table will grow dynamically as needed, up to MAX_TRANS.
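INI_TRANS can be raised when ITL waits are observed. Note that ALTER TABLE ... INITRANS only affects newly formatted blocks, so a move is needed to apply it to existing ones. A sketch:

```sql
-- Raise the initial ITL slot count (affects new blocks only)
ALTER TABLE jtf.jtf_tasks_b INITRANS 4;
ALTER TABLE jtf.jtf_tasks_b MOVE;   -- reformats existing blocks; rebuild indexes afterwards

-- Verify the setting
SELECT table_name, ini_trans, max_trans
FROM   dba_tables
WHERE  owner = 'JTF' AND table_name = 'JTF_TASKS_B';
```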


PCT_FREE & PCT_USED
Row Chaining & Migrating

Scenario:1 Row Chaining


The data for a row in a table may be too large to fit into a single data block. Sometimes the row is too large
to fit into a single block even when it is first inserted, because of LONG/LONG RAW columns.

In this case, Oracle stores the data for the row in a chain of data blocks (one or more) reserved for that
segment.

Row chaining caused by LONG/LONG RAW columns is unavoidable.

Scenario:2 Row migrating


Initially, the row fits in one block.

After an update the row size increases, but there is no free space in the block to hold the updated
row => Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in
a new block.

Oracle preserves the original row piece of a migrated row as a pointer to the new block containing the migrated
row. The rowid of a migrated row does not change.

When a row is chained or migrated, I/O performance associated with this row decreases because Oracle
must scan more than one data block to retrieve the information for the row.
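Chained/migrated rows can be quantified in two ways; a sketch (CHAIN_CNT is populated by ANALYZE, not by DBMS_STATS, and ANALYZE overwrites optimizer statistics, so re-gather with DBMS_STATS afterwards):

```sql
-- Instance-wide counter: rows that needed a second block fetch
SELECT name, value
FROM   v$sysstat
WHERE  name = 'table fetch continued row';

-- Per-table count (expensive on large tables; run off-peak)
ANALYZE TABLE jtf.jtf_tasks_b COMPUTE STATISTICS;
SELECT table_name, num_rows, chain_cnt
FROM   dba_tables
WHERE  owner = 'JTF' AND table_name = 'JTF_TASKS_B';
```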
ROWID Pseudocolumn

The ROWID pseudocolumn returns the address of the row. This address is a combination of:

- Object number (each object has its own unique object number in a database)
- Tablespace/datafile reference
- Datablock reference in a datafile
- The position of the row in a datablock

Rowid values have several important uses:

- They are THE fastest way to access a single row
- They can show you how the rows in a table are stored
- They are unique identifiers for rows in a table

Please note: you shouldn't use rowid values as primary keys.
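The components listed above can be decoded with the DBMS_ROWID package, e.g.:

```sql
-- Break a rowid into its components for a few sample rows
SELECT rowid,
       dbms_rowid.rowid_object(rowid)        data_object_id,
       dbms_rowid.rowid_relative_fno(rowid)  relative_fno,
       dbms_rowid.rowid_block_number(rowid)  block_no,
       dbms_rowid.rowid_row_number(rowid)    row_no
FROM   jtf.jtf_tasks_b
WHERE  ROWNUM <= 5;
```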


Index Fragmentation
/* ************************************************************* */
/* Index Fragmentation Status (idsfrag.sql): */
/* */
/* This script will report the index fragmentation status */
/* for a schema. */
/* */
/* Note: - Do not run this script during peak processing hours!!! */
/* - This script will fail for locked tables. */
/* */
/* ************************************************************* */

prompt -- Drop and create temporary table to hold stats...


drop table my_index_stats
/
create table my_index_stats (
index_name varchar2(30),
height number(8),
del_lf_rows number(8),
distinct_keys number(8),
rows_per_key number(10,2),
blks_gets_per_access number(10,2)
)
/

prompt -- Save script which we will later use to populate the above table...
insert into my_index_stats
select NAME, HEIGHT, DEL_LF_ROWS, DISTINCT_KEYS, ROWS_PER_KEY,
BLKS_GETS_PER_ACCESS
from INDEX_STATS
-- Note this open line...

save /tmp/save_index_stats.sql replace

prompt
prompt -- Spool listing with validate commands...
col line1 newline
col line2 newline
col line3 newline
set pagesize 0
set echo off
set termout off
set trimspool on
set feed off
set linesize 200
spool /tmp/validate_indexes.sql
select 'prompt Process table '||owner||'.'||table_name||
', index '||index_name||'...' line1,
'validate index '||owner||'.'||index_name||';' line2,
'@/tmp/save_index_stats.sql' line3
from sys.dba_indexes where owner = 'SCOTT'
order by table_name, index_name
/
spool off
set termout on
set feed on

prompt
prompt -- Run script to validate indexes...
@/tmp/validate_indexes.sql

prompt -- Print nice report...


set pagesize 50000
set trimspool on
col height format 99999
col del_rows format 9999999
col rows/key format 999999.9
spool idxfrag.lst
select INDEX_NAME, HEIGHT, DEL_LF_ROWS "DEL_ROWS", DISTINCT_KEYS "DIST KEYS",
ROWS_PER_KEY "ROWS/KEY",
BLKS_GETS_PER_ACCESS "BLKS/ACCESS"
from MY_INDEX_STATS
/
spool off

-- Cleanup
drop table my_index_stats
/
! rm /tmp/validate_indexes.sql
! rm /tmp/save_index_stats.sql

prompt
prompt Report is in idxfrag.lst
prompt Done!!!
Index Fragmentation

analyze index jtf.JTF_TASKS_B_U1 validate structure;


Note: You can validate only one index at a time. "analyze index ... validate structure" populates the INDEX_STATS view (one row, visible only to the current session).

select * from index_stats;

select (del_lf_rows_len/lf_rows_len) * 100 from index_stats;

If deleted entries are more than about 15% of current entries, then a rebuild would be recommended.

We may gain a performance improvement after rebuilding the index.

Rebuilding an index can either be done by using:

alter index <index_name> rebuild;


or
dropping and recreating the index.
ALTER USER

SQL> alter user perfstat identified by "password";

User altered.
Statspack creation

sqlplus / as sysdba

@spdrop.sql

@spcreate.sql

[gato]-->sqlplus perfstat/password

SQL*Plus: Release 11.2.0.3.0 Production on Thu May 28 08:30:54 2015

Copyright (c) 1982, 2011, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> execute statspack.snap(i_snap_level=>7);

PL/SQL procedure successfully completed.

SQL> exit

Note:
The best way to check the benefits of setting up a keep pool is (for your version of Oracle) to run
statspack at level 7 (the default is level 5) and check the segment statistics section of the reports to see if
there are any objects which are responsible for a large amount of physical I/O that would give you a lot of
benefit if you could "KEEP" them, without sacrificing a lot of memory that might cause other objects to
start doing more physical I/O.

Levels >= 7 Additional data: Segment Level Statistics

This level includes all statistics gathered in the lower levels, and additionally gathers
the performance data on highly used segments. RAC specific segment level statistics
are also captured with level 7.

A level 7 snapshot gathers information which determines what segments are more
heavily accessed and contended. With this information, you can decide to modify the
physical layout of some segments or of the tablespaces they reside in. For example, to
better spread the segment IO load, you can add files residing on different disks to a
tablespace storing a heavily accessed segment or you can partition a segment. This
information can also help decide on changing segment attributes values such
as PCTFREE or INITRANS. On a RAC environment, this information allows us to easily
spot the segments responsible for much of the cross-instance traffic.

Level 7 includes the following segment statistics:

- Logical reads
- Db block changes
- Physical reads
- Physical writes
- Physical reads direct
- Physical writes direct
- Global cache consistent read blocks served (RAC specific)
- Global cache current blocks served (RAC specific)
- Buffer busy waits
- ITL waits
- Row lock waits

Although Statspack captures all segment statistics, it reports only the following
statistics that exceed one of the predefined threshold parameters:

- Number of logical reads on the segment. The default is 10,000.
- Number of physical reads on the segment. The default is 1,000.
- Number of buffer busy waits on the segment. The default is 100.
- Number of row lock waits on the segment. The default is 100.
- Number of ITL waits on the segment. The default is 100.
- Number of global cache consistent read blocks served (RAC only). The default is 1,000.
- Number of global cache current blocks served (RAC only). The default is 1,000.

The values of the threshold parameters are used when deciding which segment statistics
to collect. If a segment's statistic exceeds a threshold value, all statistics regarding this
segment are captured during the snapshot. The threshold levels used are either those
stored in the table stats$statspack_parameter, or the thresholds specified when the
snapshot is taken.
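The snapshot level and the segment thresholds can be changed permanently with statspack.modify_statspack_parameter (parameter names taken from spcpkg.sql; verify against your release):

```sql
-- Make level 7 the default for future snapshots
EXEC statspack.modify_statspack_parameter(i_snap_level => 7);

-- Example: lower the segment logical-reads threshold to 5,000
EXEC statspack.modify_statspack_parameter(i_seg_log_reads_th => 5000);
```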
v$active_session_history

To find out how many users are submitting something simultaneously: for example, timesheet submissions

select sample_time, count(sample_time)


from v$active_session_history
where
action = 'HXC/DBD_SELF_SERVICE_TIME_US' or module in
('e:HXC:fwk:hxc.selfservice.timecard.server.Timeca', 'e:HXC:wf:HXCEMP',
'e:HXC:bes:oracle.apps.wf.notification.send')
group by sample_time
order by sample_time asc;
-- order by count(sample_time) desc;

Related tables: v$event_name, dba_hist_active_sess_history, dba_hist_snapshot, dba_hist_resource_limit
Resolving session performance issues

Ref: http://www.oracle.com/technetwork/issue-archive/2012/12-jul/o42dba-1566567.html

Step1: Find out the session ID. You can get session ID using user id.

Step2: If you don't have the session ID, you can get the list of waiting sessions using the below query:
***
select sid,username,status,
decode(state, 'WAITING','Waiting',
'Working') state,
decode(state,
'WAITING',
'So far '||seconds_in_wait,
'Last waited '||
wait_time/100)||
' secs for '||event
"Description",
blocking_session, blocking_instance, row_wait_obj#, row_wait_file#, row_wait_block#,
row_wait_row#,
program, osuser, machine, terminal, service_name, logon_time, last_call_et
from v$session
where
--username = 'APPS' AND
EVENT like '%lock%'
order by seconds_in_wait desc;
***

Note: Check in which state the session is in.


In Oracle, a session can be in one of the below states:
- idle
- processing
- waiting

Note2: "SQL*Net message from client" ==> idle. There is no specific state that
says a session is idle.

"rdbms ipc message" ==> this also means idle

Step:3
Find the SQL statement that the waiting session is running:
set long 99999
select sql_fulltext
from v$sql
where sql_id = (select sql_id from v$session where sid = 6902)
Step:4
Take the blocking_session ID and get the query it is currently running:
set long 99999
select sql_fulltext
from v$sql
where sql_id = (select sql_id from v$session where sid = 6872)

Step:5
You can get row on which a specific session waiting using below 2 queries:

select row_wait_obj#,
row_wait_file#,
row_wait_block#,
row_wait_row#
from v$session
where sid = 6902;

select owner, object_type, object_name, data_object_id


from dba_objects
where object_id = 188675

select *
from &1..&2
where rowid =
dbms_rowid.rowid_create (
rowid_type => 1,
object_number => &3,
relative_fno => &4,
block_number => &5,
row_number => &6
);

Note: last_call_et in v$session shows how long (in seconds) it has been since the session last
started a call, i.e., since it submitted its current statement.

Checking Data Access Waits

 
select SID, state, event, p1, p2
from v$session
where username = 'ARUP';

 SID STATE    EVENT                    P1    P2
---- -------- ----------------------- --- -----
2201 WAITING  db file sequential read   5  3011

The P1 column shows the file ID, and the P2 column shows the block ID. From that information,
you can get the segment name from the extent information in DBA_EXTENTS, as shown below:

 
select owner, segment_name
from dba_extents
where file_id = 5
and 3011 between block_id
and block_id + blocks;

OWNER SEGMENT_NAME
—————— —————————————
ARUP T1
 

This shows that the session is reading the T1 table, owned by ARUP, from disk. You should direct your
attention to this table for tuning. You can move the table to a high-speed disk for faster I/O, or, alternatively,
you can focus on reducing I/O against this table, for example by creating new indexes, creating
materialized views, or building a result cache.

Ref: http://www.oracle.com/technetwork/issue-archive/2012/12-nov/o62dba-1741123.html
Ref: http://www.oracle.com/technetwork/issue-archive/2013/13-jan/o13dba-1871177.html
Forcing direct path reads to SGA

Query: Missing blocks in SGA


SELECT o.OBJECT_NAME, T1.last_analyzed, T1.blocks ACTUAL_BLOCKS, COUNT(*) NUMBER_OF_BLOCKS_IN_SGA, T1.blocks - COUNT(*) MISSING
FROM DBA_OBJECTS o, V$BH bh, dba_tables T1
WHERE o.DATA_OBJECT_ID = bh.OBJD
AND o.OWNER != 'SYS'
AND o.object_name IN ('JTF_TASKS_B', 'XXDBD_BR_CALL_ENTRY')
AND o.object_name = T1.table_name
AND o.owner = T1.owner
GROUP BY o.OBJECT_NAME, T1.blocks, t1.last_analyzed
ORDER BY COUNT(*) desc;

XXDBD_BR_CS_GEO_EVENTS:
alter session set "_serial_direct_read"=never;
select 'REGISTRO MAIS ANTIGO: ' || 'Task_Number: '|| Task_Number || ' - ' || 'Geo_Evento_Id: ' ||
Geo_Event_Id || ' - pendente de validação desde: ' || to_CHAR(CREATED_DATE, 'DD/MM/RRRR
HH24:MI:SS') from xxdbd.xxdbd_br_cs_geo_events
where geo_event_id = (select MIN(geo_event_id) from xxdbd.xxdbd_br_cs_geo_events where STATUS =
'NEW' AND CREATED_DATE < SYSDATE - 0.25/24)
;

XXDBD_BR_CALL_ENTRY:
alter session set "_serial_direct_read"=never;
SELECT t1.CALL_ENTRY_ID, t1.SOURCE_ID, t1.STATUS, t1.SR_NUMBER, t1.TASK_NUMBER, t1.LAST_VALIDATION_RESULT,
t1.CALL_ENTRY_TYPE, t1.CUSTOMER_CALL_NUMBER, t1.CUSTOMER_CODE FROM XXDBD.XXDBD_BR_CALL_ENTRY t1
WHERE ( t1.WM_STATUS is null or t1.WM_STATUS <> 'y' ) AND (t1.STATUS <> 'NOVO' AND t1.SOURCE_TYPE =
'GEN.DBD_CALL_ENTRY');

JTF_TASKS_B:
alter session set "_serial_direct_read"=never;
select count(*) FROM(
SELECT /*+ ALL_ROWS */ MAX(TASK_ID), SOURCE_OBJECT_ID
FROM JTF.JTF_TASKS_B WHERE TASK_ID IS NOT NULL GROUP BY SOURCE_OBJECT_ID
);
Check what objects to CACHE in memory

SELECT T1.SNO, T2.OWNER, T2.TABLE_NAME, T2.NUM_ROWS, T2.BLOCKS*8/1024 TABLE_SIZE_MB,
T1.count_object TABLE_SCANS_COUNT,
T3.value total_physical_reads
FROM
(select rownum SNO, object_name, count_object
FROM
(
select object_name, count(object_name) count_object from v$sql_plan where operation = 'TABLE ACCESS'
group by object_name
order by 2 desc
) )T1, dba_tables T2 , v$segment_statistics T3
WHERE
T1.object_name = T2.table_name AND
T1.object_name = T3.object_name AND
T3.statistic_name in ('physical reads')
ORDER BY 6 DESC
;

Note: ordered by table-scan count descending. A size restriction can be added, e.g. TABLE_SIZE_MB <= 100.

Query: TABLE_SIZE_MB <= 200, ordered by table-scan count

SELECT T1.SNO, T2.OWNER, T2.TABLE_NAME, T2.NUM_ROWS, T2.BLOCKS*8/1024 TABLE_SIZE_MB,
T1.count_object TABLE_SCANS_COUNT,
T3.value total_physical_reads
FROM
(select rownum SNO, object_name, count_object
FROM
(
select object_name, count(object_name) count_object from v$sql_plan where operation = 'TABLE ACCESS'
group by object_name
order by 2 desc
) )T1, dba_tables T2 , v$segment_statistics T3
WHERE
T1.object_name = T2.table_name AND
T1.object_name = T3.object_name AND
T3.statistic_name in ('physical reads')
AND T2.BLOCKS*8/1024 <= 200
-- ORDER BY 7 desc
-- ORDER BY T1.SNO
ORDER BY 6 desc

Latest (use this): instead of only TABLE ACCESS operations, this considers all references to an object in V$SQL_PLAN
SELECT T1.SNO, T2.OWNER, T2.TABLE_NAME, T2.NUM_ROWS, T2.BLOCKS*8/1024 TABLE_SIZE_MB,
T1.count_object SQL_PLAN_COUNT,
T3.value total_physical_reads
FROM
(select rownum SNO, object_name, count_object
FROM
(
select object_name, count(object_name) count_object from v$sql_plan
where object_name IS NOT NULL AND object_type = 'TABLE'
-- operation = 'TABLE ACCESS'
and object_owner not in ('SYS', 'SYSTEM')
group by object_name
order by 2 desc
) )T1, dba_tables T2 , v$segment_statistics T3
WHERE
T1.object_name = T2.table_name AND
T1.object_name = T3.object_name AND
T3.statistic_name in ('physical reads')
-- AND T2.OWNER = 'APPLSYS'
-- AND T2.BLOCKS*8/1024 <= 200
-- ORDER BY 7 desc
-- ORDER BY T1.SNO
ORDER BY 6 desc
;

select count(*) from v$sql_plan where object_name = 'CSF_DEBRIEF_HEADERS_V2' AND OPERATION = 'VIEW';
V8 bundled call

http://blog.tanelpoder.com/2011/08/23/v8-bundled-exec-call-and-oracle-program-interface-opi-calls/
Index usage

alter index indexname monitoring usage;

select * from v$object_usage;

alter index indexname nomonitoring usage;


AWR Warehouse

Ref: http://docs.oracle.com/cd/E24628_01/server.121/e55047/tdppt_awr_warehouse.htm#TDPPT145
Table stats

exec dbms_stats.gather_table_stats('OKC','OKC_K_LINES_B',degree => 10, cascade => true, method_opt => 'for all columns size auto', no_invalidate=> false);

*** Gather stats for all tables in a schema, excluding MVs (works well):

BEGIN
FOR x IN (SELECT *
FROM dba_tables
WHERE owner = 'SOME_SCHEMA'
AND table_name NOT IN (<<list of MVs>>))
LOOP
dbms_stats.gather_table_stats( x.owner, x.table_name, ... );
END LOOP;
END;

Schema Level Stats:

select 'exec dbms_stats.gather_table_stats(''' || OWNER || ''',''' || table_name || ''',' ||
'degree=>10, cascade => true, method_opt => ''for all columns size auto'', no_invalidate=> false);'
from dba_tables where owner = 'XXDBD_ODS_ORA' and TABLE_NAME NOT LIKE '%_O%';

Notes: From Oracle 11g onwards, it is better to leave ESTIMATE_PERCENT at its default value.

Also it is recommended to leave METHOD_OPT at its default value.

DEGREE: Better to set it to a number for large tables.


Granularity: For partitioned tables
CASCADE: controls whether index statistics are gathered along with table statistics; the default (AUTO_CASCADE) lets Oracle decide whether index statistics are needed.

DBMS_STATS.GET_PREFS  To find out default preferences

Stats for stale tables only


select distinct 'exec dbms_stats.gather_table_stats(''' || t1.OWNER || ''',''' || t1.table_name || ''',' ||
'degree=>10, cascade => true, method_opt => ''for all columns size auto'', no_invalidate=> false);'
from dba_tables t1 LEFT JOIN dba_tab_statistics t2 on t1.owner = t2.owner and t1.table_name =
t2.table_name where t1.owner = 'XXDBD_DW' and t2.stale_stats != 'NO'

Stale stats schema wise


select owner , count(1) from dba_tab_statistics where stale_stats != 'NO' AND OWNER NOT IN ('SYS',
'SYSTEM') group by owner order by 2 desc

Stale stats script for all stale tables:


select distinct 'exec dbms_stats.gather_table_stats(''' || t1.OWNER || ''',''' || t1.table_name || ''',' ||
'degree=>4, cascade => true, method_opt => ''for all columns size auto'', no_invalidate=> false);' from
dba_tables t1 LEFT JOIN dba_tab_statistics t2 on t1.owner = t2.owner and t1.table_name = t2.table_name
where t1.owner NOT IN ('SYS', 'SYSTEM', 'GG') and t2.stale_stats != 'NO'
TM Contention wait events

http://logicalread.solarwinds.com/solving-oracle-enq-tm-contention-waits-dr01/#.VZDgo_mqqko
Export and Import

cd /mnt/oracle_stage_NAS/tuning/dumps/gbip

exp \'/ as sysdba\' tables=XXDBD_DW.XXDBD_DW_FACT_SERV_REV file=export.dmp

imp \'/ as sysdba\' file=/mnt/oracle_stage_NAS/tuning/dumps/gbip/export.dmp TABLES=XXDBD_DW_FACT_SERV_REV FROMUSER=XXDBD_DW touser=XXDBD_DW

Ignore=y To import into an existing table

imp \'/ as sysdba\' file=/mnt/oracle_stage_NAS/tuning/dumps/gbip/export.dmp TABLES=XXDBD_DW_FACT_SERV_REV FROMUSER=XXDBD_DW touser=XXDBD_DW IGNORE=y

To take SCHEMA backup:

CREATE DIRECTORY DUMP_FILES1 AS '/usr/apps/dumpfiles1';

GRANT READ, WRITE ON DIRECTORY dpump_dir TO APPS;

expdp system/<password> SCHEMAS=APPS DIRECTORY=EUL5_APPS_DUMP DUMPFILE=APPS.dmp LOGFILE=APPS.log

expdp par file


INCLUDE=TABLE:"IN (SELECT table_name FROM xxdba_eul)"

parallel=4

directory=DATA_PUMP_DIR

dumpfile=exp_prod.dmp

logfile=exp_prod.log
Tablespace Free check

set linesize 150


column tablespace_name format a20 heading 'Tablespace'
column sumb format 999,999,999
column extents format 9999
column bytes format 999,999,999,999
column largest format 999,999,999,999
column Tot_Size format 999,999 Heading 'Total| Size(Mb)'
column Tot_Free format 999,999,999 heading 'Total Free(MB)'
column Pct_Free format 999.99 heading '% Free'
column Chunks_Free format 9999 heading 'No Of Ext.'
column Max_Free format 999,999,999 heading 'Max Free(Kb)'
set echo off
PROMPT FREE SPACE AVAILABLE IN TABLESPACES
select a.tablespace_name,sum(a.tots/1048576) Tot_Size,
sum(a.sumb/1048576) Tot_Free,
sum(a.sumb)*100/sum(a.tots) Pct_Free,
sum(a.largest/1024) Max_Free,sum(a.chunks) Chunks_Free
from
(
select tablespace_name,0 tots,sum(bytes) sumb,
max(bytes) largest,count(*) chunks
from dba_free_space a
group by tablespace_name
union
select tablespace_name,sum(bytes) tots,0,0,0 from
dba_data_files
group by tablespace_name) a
group by a.tablespace_name
order by pct_free;
Variable substitution

Set define off


Stats after upgrade to 11.2.0.4

After upgrading to 11.2.0.4, immediately run the following in this exact sequence. Run in a VNC
session, as it may take a few hours:

exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_database_stats(gather_sys=>true);
exec dbms_stats.gather_dictionary_stats;

While the database is live and under regular load, do the following:
exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

Reference: How to Gather Statistics on Objects Owned by the 'SYS' User and 'Fixed' Objects (Doc ID
457926.1)
STATISTICS
Checking auto optimizer statistics collection is running
select client_name, status from dba_autotask_client where client_name like '%stats%'

Disabling and re-enabling auto optimizer statistics

EXEC DBMS_AUTO_TASK_ADMIN.disable

EXEC DBMS_AUTO_TASK_ADMIN.enable

Doc ID
1968577.1

Tables
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_JOB_HISTORY
DBA_AUTOTASK_WINDOW_HISTORY
DBA_SCHEDULER_JOB_RUN_DETAILS
DBA_AUTOTASK_TASK

Parameters
_optimizer_autostats_job

Queries
select client_name,status from dba_autotask_client where client_name like
'%stats%';

SELECT window_name, job_name, job_status, job_duration, JOB_START_TIME
FROM DBA_AUTOTASK_JOB_HISTORY
WHERE client_name='auto optimizer stats collection';

select * from dba_autotask_window_history;

select session_id, LOG_DATE, REQ_START_DATE, ACTUAL_START_DATE, STATUS, RUN_DURATION, ADDITIONAL_INFO
from DBA_SCHEDULER_JOB_RUN_DETAILS where JOB_NAME like 'ORA$AT_OS_OPT_SY_%' order by
ACTUAL_START_DATE desc;
SELECT a.ksppinm as "Parameter", a.ksppdesc as "Description", b.ksppstvl as
"Session Value", c.ksppstvl FROM x$ksppi a, x$ksppcv b, x$ksppsv c WHERE
a.indx = b.indx AND a.indx = c.indx
AND a.ksppinm LIKE '%optimizer_autostats_job%' ;

DBMS_STATS.GATHER_SCHEMA_STATS
It is important to note that using DBMS_STATS.GATHER_SCHEMA_STATS on an E-Business suite database
is not supported.

Doc Id: 375351.1

Tables with locked statistics


Select DISTINCT owner, table_name, stattype_locked from dba_tab_statistics where stattype_locked is not null;

To unlock: dbms_stats.unlock_schema_stats

exec dbms_stats.unlock_schema_stats('AR');

exec dbms_stats.unlock_table_stats('AR', 'AR_REV_REC_QT');

FND_STATS (vs) GATHER STATS

Gather Table Statistics: this concurrent request is for standard tables. It uses the FND_STATS
package to collect these statistics.

It is recommended to use FND_STATS rather than DBMS_STATS in Apps environments.


E-Biz recommended patches
Enabling Trace (Novice guide)

Doc Id: 117129.1


Identifying Oracle version
select * from v$version;
Histograms

drop table customer_rk;


create table customer_RK(cid number, cname char(5), cloc char(10));
BEGIN
FOR v_Count IN 1..440000 LOOP
INSERT INTO customer_RK VALUES (v_count, 'Ravi', 'Hyderabad');
END LOOP;
END;
/
BEGIN
FOR v_Count IN 440001..490000 LOOP
INSERT INTO customer_RK VALUES (v_count, 'Mike', 'Canton');
END LOOP;
END;
/
BEGIN
FOR v_Count IN 490001..500000 LOOP
INSERT INTO customer_RK VALUES (v_count, 'Ron', 'Rio');
END LOOP;
END;
/

select * from dba_tab_histograms where table_name = 'CUSTOMER_RK'; ==> No Rows found

EXEC dbms_stats.gather_table_stats(user, 'CUSTOMER_RK');


select * from dba_tab_histograms where table_name = 'CUSTOMER_RK'; ==> Now this shows histograms information

##Run some sample queries on the table


select * from customer_rk where cid = 100;
select * from customer_rk where cname = 'Ravi';
select * from customer_rk where cloc = 'India';

select * from dba_tab_histograms where table_name = 'CUSTOMER_RK'; ==> You will see same
histograms data as previous

select column_name, count(*) from user_tab_histograms where table_name = 'CUSTOMER_RK' group by
column_name; ==> Column-wise grouping of histograms

EXEC dbms_stats.gather_table_stats(user, 'CUSTOMER_RK'); ==> This will change histograms based upon
sample queries on customer_rk table

select column_name, count(*) from user_tab_histograms where table_name = 'CUSTOMER_RK' group by
column_name; ==> Column-wise grouping of histograms
Note: Oracle uses size auto to gather column stats. I.e., it looks at predicates you have used and the data
in the columns and figures out what histograms to gather.

If you don’t want that, you can just get endpoints (one bucket - high and low values and number of values
- this is useful stuff generally) by doing this:

exec dbms_stats.gather_table_stats( user, 'CUSTOMER_RK', method_opt=>'for all columns size 1' );

select column_name, count(*) from user_tab_histograms where table_name = 'CUSTOMER_RK' group by
column_name;  This returns two endpoint rows per column: one for the minimum and one for the maximum.

select * from user_tab_histograms where table_name = 'CUSTOMER_RK';

Now, if you really don’t want column information, you can do below:

exec dbms_stats.delete_table_stats( user, 'CUSTOMER_RK' );

exec dbms_stats.gather_table_stats( user, 'CUSTOMER_RK', method_opt=>'for columns ' );

select column_name, count(*) from user_tab_histograms where table_name = 'CUSTOMER_RK' group by
column_name;  No rows found
method_opt

https://blogs.oracle.com/optimizer/entry/how_does_the_method_opt
Verifying CBO statistics
Doc ID 163208.1

https://support.oracle.com/epmos/faces/DocumentDisplay?
_afrLoop=360124379002405&parent=DOCUMENT&sourceId=744143.1&id=163208.1&_afrWindowMode
=0&_adf.ctrl-state=pecfzhfi9_145
Oracle Applications 11i initialization parameters

Doc ID 216205.1
Database Size

select (select sum(bytes)/1024/1024/1024/1024 from dba_data_files)+
(select sum(bytes)/1024/1024/1024/1024 from dba_temp_files) "Size in TB" from dual;
Refresh a Materialized view

EXEC DBMS_SNAPSHOT.REFRESH('MV NAME', 'f');


What did change lately

select * from dba_objects where OWNER NOT IN ('SYS', 'SYSTEM') and owner = 'XXDBD_DW'
order by last_ddl_time desc;
Table and Index statistics history
SELECT ob.owner, ob.object_name, ob.object_type, rowcnt, avgrln ,samplesize, analyzetime
FROM sys.WRI$_OPTSTAT_TAB_HISTORY, dba_objects ob
WHERE owner=upper('&OWNER')
and object_name=upper('&TABLE')
and object_type in ('TABLE')
and object_id=obj#
order by analyzetime desc;

OEL/Virtual Box setup

https://technology.amis.nl/2013/06/29/how-to-create-a-virtual-box-vm-with-oracle-enterprise-linux-
inside-a-dummy-guide/
Alter Index Unusable script
select 'ALTER INDEX ' || owner || '.' || index_name || ' unusable;' from dba_indexes
where owner ='XXDBD_OBAW'
and TABLE_NAME = 'W_PURCH_ORDER_F'
and index_type = 'BITMAP'
;
Grant EXPLAIN plan privilege
grant insert on plan_table to appsro;

grant ADMINISTER SQL MANAGEMENT OBJECT to dbsnmp;  To grant sql baseline implementation

Grant sql tuning advisor


grant advisor to appsro;
Check Archivelog

SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG
MV Refresh using DELETE (vs) TRUNCATE

Non-atomic refresh uses TRUNCATE plus direct-path INSERT /*+ APPEND */.

If atomic_refresh = false, then TRUNCATE + INSERT

If atomic_refresh = true, then DELETE + INSERT
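As a sketch, the two behaviours can be requested explicitly via DBMS_MVIEW.REFRESH (the MV name below is a placeholder):

```sql
-- atomic_refresh => false: TRUNCATE + direct-path INSERT (faster, but the MV
-- appears empty to readers while the refresh runs)
exec dbms_mview.refresh('MY_MV', method => 'C', atomic_refresh => false);

-- atomic_refresh => true (default): DELETE + conventional INSERT in one transaction
exec dbms_mview.refresh('MY_MV', method => 'C', atomic_refresh => true);
```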


Index Monitoring : v$object_usage is empty

Ref: http://hrivera99.blogspot.in/2011/04/vobjectusage-view-empty.html

create or replace view V$ALL_OBJECT_USAGE


(OWNER
,INDEX_NAME
,TABLE_NAME
,MONITORING
,USED
,START_MONITORING
,END_MONITORING
)
as
select u.name
, io.name
, t.name
, decode(bitand(i.flags, 65536), 0, 'NO', 'YES')
, decode(bitand(ou.flags, 1), 0, 'NO', 'YES')
, ou.start_monitoring
, ou.end_monitoring
from
sys.user$ u
, sys.obj$ io
, sys.obj$ t
, sys.ind$ i
, sys.object_usage ou
where
i.obj# = ou.obj#
and io.obj# = ou.obj#
and t.obj# = i.bo#
and u.user# = io.owner#
/

select * from v$all_object_usage where used = 'YES';


concurrent request id to sid

SELECT ses.sid, ses.serial#


FROM v$session ses, v$process pro
WHERE ses.paddr = pro.addr
AND pro.spid IN (SELECT oracle_process_id FROM fnd_concurrent_requests WHERE request_id =
'&request_id');
SID to session details
select sid,serial#,a.sql_id,a.SQL_TEXT,S.USERNAME,i.host_name,machine,S.event,S.seconds_in_wait
sec_wait,
to_char(logon_time,'DD-MON-RR HH24:MI') login
from gv$session S,gV$SQLAREA A,gv$instance i
where S.username is not null
-- and S.status='ACTIVE'
AND S.sql_address=A.address
and s.inst_id=a.inst_id and i.inst_id = a.inst_id
and sql_text not like 'select S.USERNAME,S.seconds_in_wait%' and sid = 3268

select * from v$active_session_history where session_id = 7506


How much undo getting generated?

alter session set nls_date_format='dd-mm-yy hh24:mi:ss';


select begin_time, end_time,
(undoblks * (select value from v$parameter where
name='db_block_size'))
undo_bytes from v$undostat;

Guaranteed undo retention, meaning that undo data will never be overwritten until the time specified by
the undo retention has passed, is enabled at the tablespace level.
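For example (the tablespace name is an assumption; check dba_tablespaces for the actual undo tablespace):

```sql
-- Enable guaranteed undo retention on the undo tablespace
alter tablespace undotbs1 retention guarantee;

-- Verify: RETENTION shows GUARANTEE / NOGUARANTEE
select tablespace_name, retention
from   dba_tablespaces
where  contents = 'UNDO';
```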
Flashback query

alter session set nls_date_format='dd-mm-yy hh24:mi:ss';

insert into scott.emp
(select * from scott.emp as of timestamp (systimestamp - 1/24)
minus
select * from scott.emp);
BRPROD

alter session set "_serial_direct_read"=never;


SELECT COUNT(*) FROM
(
SELECT /*+ ALL_ROWS */ TASK_ID COL1,
INCIDENT_ID COL2,
PARTY_NAME COL3,
PARTY_NUMBER COL4,
TASK_TYPE COL5,
TASK_STATUS COL6,
TASK_NUMBER COL7,
TASK_NAME COL8,
INCIDENT_NUMBER COL9,
SEVERITY COL10,
DESCRIPTION COL11,
RESPOND_BY COL12,
RESOLVE_BY COL13,
PRODUCT COL14,
PRODUCT_VERIFICATION COL15,
SERIAL_NUMBER COL16,
SERVICE COL17,
SERIALNO_VERIFICATION COL18,
TECH_REG_NAME COL19,
SITE_TELEFONE COL20,
CREATION_DATE COL21,
INCIDENT_DATE COL22,
BRANCH_NUMBER COL23,
BRANCH_NAME COL24,
CAT_NUMBER COL25,
CAT_NAME COL26,
OPERATION_RECURRENCE COL27,
RANK_RECURRENCE COL28,
ESTIMATE_NO COL29,
SITE_TYPE COL30,
TECH_REG_NUMBER COL31,
CUST_SITE_CODE COL32,
SR_CREATED_BY COL33,
MTBF_FATURA COL34,
TAG COL35,
CONTRACT_NUMBER COL36,
SR_PROB_SUMMARY COL37,
CUST_SITE_NAME COL38,
SR_STATUS COL39,
INCIDENT_TYPE COL40,
CUSTOMER_TICKET_NUMBER COL41,
TECHNICIAN_NUMBER COL42,
TECHNICIAN_NAME COL43,
ASSIGNED_DATE COL44,
PARENT_TASK_NUMBER COL45,
SITE_ADDRESS COL46,
CNPJ_SITE COL47,
CONTACT COL48,
SCHEDULED_START_DATE COL49,
SCHEDULED_END_DATE COL50,
VANTIVE_CALL_NUMBER COL51,
TASK_PRIORITY COL52
FROM APPS.XXDBD_DISPATCHER_SEARCH_V
WHERE 1 =1
AND UPPER(TO_CHAR(BRANCH_NUMBER)) IN ('49', '99')
AND UPPER(TO_CHAR(TASK_TYPE)) = 'Naveen'
AND UPPER(TO_CHAR(TASK_STATUS)) = 'Ravi'
ORDER BY COL22 DESC
)
commit;
Alter table nocompress

alter table "XXDBD_OBAW"."W_PURCH_RCPT_A_TMP" nocompress;

alter table "XXDBD_OBAW"."W_PURCH_RCPT_A_TMP" nologging;


Parallel Query

By, Arup Nanda:


http://www.oracle.com/technetwork/issue-archive/2010/o40parallel-092275.html

How parallel execution works, by Oracle:


https://docs.oracle.com/cd/E11882_01/server.112/e25523/parallel002.htm#BEIGDHCH
Oracle Net Performance Tuning

Oracle Net Performance Tuning (Doc ID 67983.1)


Chained rows

select * from
(
select T1.owner, T1.table_name, T1.NUM_ROWS, T1.BLOCKS, T1.ACTUAL_BLOCKS, T1.RATIO,
Count(T2.table_name) number_of_columns
FROM (
select owner, table_name, NUM_ROWS, blocks, avg_row_len, round
((NUM_ROWS*AVG_ROW_LEN*1.16)/8096,2) actual_blocks,
round(blocks/(NUM_ROWS*AVG_ROW_LEN*1.16/8096),2) ratio
from dba_tables
where owner NOT IN ( 'SYS', 'SYSMAN')
-- and table_name = 'STG_TISFC001133_RAVI'
and blocks > 1000 AND NUM_ROWS > 0 AND AVG_ROW_LEN > 0
--order by ((NUM_ROWS*AVG_ROW_LEN*1.16)/8/1024 - blocks) desc
order by 7 desc
) T1, dba_tab_columns T2
where T1.table_name = T2.table_name and T1.owner = T2.owner
--and T1.table_name = 'XXDBD_DW_DIM_SERVICE_CONTRACT'
-- ORDER BY 6 desc
group by T1.owner, T1.table_name, T1.NUM_ROWS, T1.BLOCKS, T1.ACTUAL_BLOCKS, T1.RATIO
)
order by ratio desc;

analyze table "XXDBD_OBAW"."WC_SALES_PROF_HISTORY_F" list chained rows into chained_rows;

create table XXDBD_DW.CHAINED_ROWS (


owner_name varchar2(30),
table_name varchar2(30),
cluster_name varchar2(30),
partition_name varchar2(30),
subpartition_name varchar2(30),
head_rowid rowid,
analyze_timestamp date);

select 'analyze table ' || OWNER || '.' || TABLE_NAME || ' list chained rows into xxdbd_dw_stg.chained_rows;'
from dba_tables where owner = 'XXDBD_OBAW';

Method:2
analyze table inv.mtl_txn_request_headers compute statistics;
Query:

select owner, chain_cnt, round(chain_cnt/num_rows*100,2) chain_pct,
avg_row_len, pct_free
from dba_tables where owner = 'INV' and table_name = 'MTL_TXN_REQUEST_HEADERS';
Killing SPID/process id
Problem: sometimes after killing a session from OEM, the statement still runs with session status
KILLED. The backend OS process (SPID) must be killed as well:

select
pid,spid
from
v$process
where
ADDR in (select paddr from v$session where SID in(select sid from v$session where status='KILLED'));
Recovery after killing a large transaction

http://www.runningoracle.com/product_info.php?products_id=330

smon, fast start parallel rollback:

https://community.oracle.com/thread/469479?start=0&tstart=0

Note 464246.1 - DATABASE HANG DUE TO PARALLEL TRANSACTION RECOVERY


Doc ID 238507.1
Corrupt blocks

BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'OBAW_DIM');
END;
/

BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'ORPHAN_KEY_TABLE',
TABLE_TYPE => dbms_repair.orphan_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'OBAW_DIM');
END;
/

SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'XXDBD_OBAW',
OBJECT_NAME => 'W_PRODUCT_XACT_F',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/

Ref: https://docs.oracle.com/cd/B10501_01/server.920/a96521/repair.htm

http://www.oracle.com/technetwork/database/availability/maa-datacorruption-bestpractices-396464.pdf
CREATE USER

SQL> create user XXDBD_RK identified by password QUOTA UNLIMITED ON XXDBD;

User created.
IMP status

set verify off


col table_name for a30

select substr(sql_text,instr(sql_text,' INTO '),30) table_name,
rows_processed,
round((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60,1) minutes,
trunc(rows_processed/((sysdate-to_date(first_load_time,'yyyy-mm-dd hh24:mi:ss'))*24*60))
rows_per_min
from v$sqlarea
where (ADDRESS,HASH_VALUE) in (select sql_address,sql_hash_value from v$session)
and command_type = 2
and open_versions > 0;
Object/ table locks

column oracle_username format a15


column os_user_name format a15
column object_name format a37
column object_type format a37
set linesize 300
select a.session_id,a.oracle_username, a.os_user_name, b.owner "OBJECT OWNER",
b.object_name,b.object_type,a.locked_mode from
(select object_id, SESSION_ID, ORACLE_USERNAME, OS_USER_NAME, LOCKED_MODE from
v$locked_object) a,
(select object_id, owner, object_name,object_type from dba_objects) b
where a.object_id=b.object_id
;

add serial id, process id as well to above query?

RAC Table locks (To find out all locks)

select c.inst_id, c.sid, c.serial#
from
gv$locked_object a,
dba_objects b,
gv$session c,
gv$lock d
where a.object_id=b.object_id and a.session_id=c.sid and c.sid=d.sid and b.object_name =
'&TABLE_NAME';

To find out blocking locks across RAC instances:


select distinct s1.username || '@' || s1.machine || ' ( INST=' || s1.inst_id || ' SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( INST=' || s1.inst_id || ' SID=' || s2.sid || ' ) ' AS blocking_status
from gv$lock l1, gv$session s1, gv$lock l2, gv$session s2
where s1.sid=l1.sid and s2.sid=l2.sid and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2 and l1.inst_id = s1.inst_id;
Golden Gate (GG): Replication with filtering

http://www.dba-oracle.com/t_goldengate_data_selection_filtering.htm
Enable/Disable Archive logging mode

Enable Archive logging mode


sqlplus / as sysdba

archive log list

show parameter recovery_file_dest

By default, archive logs will be written to the flash recovery area. If you do not want to write archive logs to the flash
recovery area you can set the parameter LOG_ARCHIVE_DEST_n to the location in which you wish to write archive
logs.

alter system set log_archive_dest_1='LOCATION=/u02/app/oracle/oradata/orcl/arch' scope =


both;

shutdown immediate

startup mount

alter database archivelog;

alter database open;

archive log list

alter system switch logfile

Disable archive logging mode


shutdown immediate

startup mount

alter database noarchivelog

alter database open


Backup controlfile to trace

sqlplus / as sysdba

startup mount
alter database backup controlfile to trace;

show parameter back


SQL> show parameter back

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_core_dump                 string      partial
background_dump_dest                 string      /mnt/orarpt12/rpt12db/admin/di
                                                 ag/rdbms/rpt12/rpt12/trace

Now, go to the background_dump_dest directory and check the alert_<SID>.log file.

If you tail the log file, it shows the path of the trace file being written.

Open the trace file and make sure all datafiles are present.

Once all datafiles are present:


alter database open
TEMP TABLESPACE USAGE

SELECT TABLESPACE_NAME, TABLESPACE_SIZE/1024/1024/1024 TB_SIZE_GB,
ALLOCATED_SPACE/1024/1024/1024 ALLOC_GB, FREE_SPACE/1024/1024/1024 FREE_GB
FROM dba_temp_free_space;
ADD TEMP tablespace

alter tablespace TEMP2 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp2_11.dbf' size 30G ;


alter tablespace TEMP2 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp2_12.dbf' size 30G ;
alter tablespace TEMP2 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp2_13.dbf' size 30G ;
alter tablespace TEMP2 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp2_14.dbf' size 30G ;
alter tablespace TEMP2 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp2_15.dbf' size 30G ;

alter tablespace TEMP1 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp1_12.dbf' size 30G ;


alter tablespace TEMP1 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp1_13.dbf' size 30G ;
alter tablespace TEMP1 add tempfile '/mnt/orarpt12/rpt12temp/fs01/temp1_14.dbf' size 30G ;
commit;
Tablespace space addition

ALTER TABLESPACE "XXDBD" ADD DATAFILE '/mnt/oragbi3/gbi3data/fs01/GBI3/
xxdbd_22.dbf' SIZE 30G
Drop Temp Datafile

alter tablespace TEMP2 drop tempfile '/mnt/orarpt12/rpt12temp/fs01/temp2_15.dbf';

To replace a temp tablespace entirely:

1. Create a new temp tablespace.
2. Make it the default temporary tablespace.
3. Drop the old temp tablespace:
   drop tablespace temp including contents and datafiles;
4. Rename the new tablespace if required.
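A sketch of the full swap (the file path and tablespace names below are examples only):

```sql
-- 1. Create the replacement temp tablespace (example path/name)
create temporary tablespace temp_new
  tempfile '/mnt/orarpt12/rpt12temp/fs01/temp_new_01.dbf' size 30G;

-- 2. Point the database default at it
alter database default temporary tablespace temp_new;

-- 3. Drop the old one once no sessions are using it
drop tablespace temp2 including contents and datafiles;
```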

ALTER DATABASE TEMPFILE '/mnt/orarpt12/rpt12temp/fs01/temp2_15.dbf' DROP INCLUDING DATAFILES;

CREATE TEMPORARY TABLESPACE "TEMP2" TEMPFILE


'/mnt/orarpt12/rpt14temp/fs01/temp04.dbf' SIZE 32212254720,
'/mnt/orarpt12/rpt12temp/fs01/temp2_2.dbf' SIZE 20971520000,
'/mnt/orarpt12/rpt12temp/fs01/temp2_3.dbf' SIZE 20971520000,
'/mnt/orarpt12/rpt12temp/fs01/temp2_4.dbf' SIZE 20971520000,
'/mnt/orarpt12/rpt12temp/fs01/temp2_5.dbf' SIZE 20971520000,
'/mnt/orarpt12/rpt12temp/fs01/temp2_6.dbf' SIZE 20971520000,
'/mnt/orarpt12/rpt12temp/fs01/temp2_7.dbf' SIZE 20971520000,
'/mnt/orarpt12/rpt12temp/fs01/temp2_8.dbf' SIZE 20971520000,
'/mnt/orarpt12/rpt12data/fs01/temp2_9.dbf' SIZE 21474836480,
'/mnt/orarpt12/rpt12data/fs01/temp2_10' SIZE 10485760000,
'/mnt/orarpt12/rpt12temp/fs01/temp2_11.dbf' SIZE 32212254720,
'/mnt/orarpt12/rpt12temp/fs01/temp2_12.dbf' SIZE 32212254720,
'/mnt/orarpt12/rpt12temp/fs01/temp2_13.dbf' SIZE 32212254720,
'/mnt/orarpt12/rpt12temp/fs01/temp2_14.dbf' SIZE 32212254720,
'/mnt/orarpt12/rpt12temp/fs01/temp02.dbf' SIZE 20971520000
TABLESPACE GROUP TEMP
EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1048576
;
Extract tablespace definition

select dbms_metadata.get_ddl('TABLESPACE',tb.tablespace_name) from dba_tablespaces tb;


Oracle Events
Library Cache Lock
This event controls the concurrency between clients of the library cache. It acquires a lock on the object
handle so that either

 One client can prevent other clients from accessing the same object
 The client can maintain a dependency for a long time which does not allow another client to
change the object

This lock is also maintained to locate an object in the library cache.
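Sessions currently stuck on library cache concurrency can be listed with a query like the following sketch:

```sql
-- Sketch: who is waiting on library cache locks/pins, and for how long
select sid, event, p1 handle_address, seconds_in_wait
from   v$session
where  event in ('library cache lock', 'library cache pin')
order  by seconds_in_wait desc;
```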

Ref: http://docs.oracle.com/cd/B19306_01/server.102/b14211/instance_tune.htm#g58652

What is library cache?


 It is a piece of memory within the SGA that Oracle uses to store SQL statements
 Whenever a process issues a SQL statement, the text of the statement is stored in the library cache
kill a session

alter system kill session 'sid, serial#';


Pending statistics

dba_tab_pending_stats

Ref: http://gavinsoorma.com/2009/09/11g-pending-and-published-statistics/

SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name = 'OKC_K_LINES_B';

OWNER                          TABLE_NAME                     LAST_ANALYZED
------------------------------ ------------------------------ ---------------
OKC                            OKC_K_LINES_B                  01-APR-16

SQL> exec dbms_stats.gather_table_stats('OKC','OKC_K_LINES_B',degree => 10, cascade => true, method_opt => 'for all columns size auto', no_invalidate=> false);

PL/SQL procedure successfully completed.

SQL> select OWNER, TABLE_NAME, LAST_ANALYZED from dba_tab_statistics where table_name = 'OKC_K_LINES_B';

OWNER                          TABLE_NAME                     LAST_ANALYZED
------------------------------ ------------------------------ ---------------
OKC                            OKC_K_LINES_B                  01-APR-16

SQL> select owner, table_name, LAST_ANALYZED from dba_tab_pending_stats where table_name = 'OKC_K_LINES_B';

OWNER                          TABLE_NAME                     LAST_ANALYZED
------------------------------ ------------------------------ ---------------
OKC                            OKC_K_LINES_B                  27-APR-16

SQL> exec DBMS_STATS.PUBLISH_PENDING_STATS('OKC', 'OKC_K_LINES_B');

PL/SQL procedure successfully completed.

BEGIN
  DBMS_STATS.DELETE_PENDING_STATS('sh','sales');
END;
/
Note: set OPTIMIZER_USE_PENDING_STATISTICS = TRUE at session level to make the optimizer use pending statistics, so they can be tested before publishing.
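For example, pending statistics can be tested in one session before publishing:

```sql
-- Make the optimizer use pending (unpublished) stats in this session only
alter session set optimizer_use_pending_statistics = true;

-- ... run and explain the problem SQL here ...

-- Revert to published statistics
alter session set optimizer_use_pending_statistics = false;
```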
GET_PREFS

This function returns the default value of the specified preference.

Syntax:

DBMS_STATS.GET_PREFS (
  pname   IN VARCHAR2,
  ownname IN VARCHAR2 DEFAULT NULL,
  tabname IN VARCHAR2 DEFAULT NULL)
RETURN VARCHAR2;

Parameters details:
AUTOSTATS_TARGET
CASCADE
CONCURRENT
DEGREE
ESTIMATE_PERCENT
METHOD_OPT
NO_INVALIDATE
GRANULARITY
PUBLISH
INCREMENTAL
INCREMENTAL_STALENESS
INCREMENTAL_LEVEL
STALE_PERCENT
GLOBAL_TEMP_TABLE_STATS
TABLE_CACHED_BLOCKS
OPTIONS

Example: select dbms_stats.get_prefs('PUBLISH') from dual;

Ref: https://docs.oracle.com/database/121/ARPLS/d_stats.htm#ARPLS68594

Example: exec dbms_stats.set_global_prefs('PUBLISH','TRUE');

Example: select dbms_stats.get_prefs('PUBLISH', 'OKC', 'OKC_K_LINES_B') from dual;

SQL> exec DBMS_STATS.PUBLISH_PENDING_STATS('OKC', 'OKC_K_LINES_B');


PL/SQL procedure successfully completed.
Export Plan baseline from SVC12 to PROD12

Create the staging table


begin
dbms_spm.create_stgtab_baseline(table_owner=>'APPS',
table_name=>'SMB_TRIP');
end;
/

Export plan baseline


declare
my_plans number;
begin
my_plans:=dbms_spm.pack_stgtab_baseline(table_owner=>'APPS',
table_name=>'SMB_TRIP',
creator=>'SYS',
sql_handle=>'SQL_3555e782d4626c4c',
enabled=>'YES');
end;
/

You can get the sql_handle from the plan baseline (DBA_SQL_PLAN_BASELINES).
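The sql_handle used above can be looked up in DBA_SQL_PLAN_BASELINES, for example:

```sql
select sql_handle, plan_name, enabled, accepted, sql_text
from dba_sql_plan_baselines
where sql_text like '%<some distinctive text from the statement>%';
```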

Export the staging table from the source database and import it into the target (e.g. with Data Pump).
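A hedged sketch of moving the staging table with Data Pump; the directory name, dump file name, and connecting user are assumptions:

```shell
# On the source (SVC12): export the staging table
expdp system directory=DATA_PUMP_DIR dumpfile=smb_trip.dmp tables=APPS.SMB_TRIP

# Copy smb_trip.dmp to the target server, then on the target (PROD12):
impdp system directory=DATA_PUMP_DIR dumpfile=smb_trip.dmp tables=APPS.SMB_TRIP
```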

Import SQL plan baselines


declare
my_plans number;
begin
my_plans:=dbms_spm.unpack_stgtab_baseline(table_owner=>'APPS',
table_name=>'SMB_TRIP',
sql_handle=>'SQL_3555e782d4626c4c',
enabled=>'YES');
end;
/
commit;
Create Database Link

CREATE PUBLIC DATABASE LINK "SVC12_TO_SVC14.DIEBOLD.COM"


CONNECT TO "SIEBEL" IDENTIFIED BY Yw#x0B7qu3xZ0j4cl
USING '(DESCRIPTION= (ADDRESS=(PROTOCOL=tcp)(HOST=ushaa042)(PORT=1541))
(CONNECT_DATA=(SID=svc14)))';

Note: the user ID must be in capital letters because it is a quoted identifier.


Ex: SYSTEM / SIEBEL
Largest segments by size

SELECT /*+ PARALLEL(8) */ *


FROM
(SELECT owner,
segment_name,
segment_type,
tablespace_name,
ROUND(bytes/1024/1024/1024,1) "size_in_GB"
FROM dba_segments
-- WHERE owner=upper('&owner')
ORDER BY bytes/1024/1024/1024 DESC
)
WHERE rownum <= 20
ORDER BY 5 DESC;
Reading execution plans 12.1

https://docs.oracle.com/database/121/TGSQL/tgsql_interp.htm#TGSQL94618
Discoverer Workbooks

Ref: https://docs.oracle.com/cd/B25016_08/doc/dl/bi/B13916_04/scheduled_workbooks.htm
Flush a single SQL ID from Shared Pool

cat multi_plans.sql
set lines 155
set pages 9999
col sql_text for a80
!echo
!echo "*** This script searches the shared pool for SQL statements with How_Many (or more) distinct plans."
-- !echo
select sql_id, count(distinct plan_hash_value) distinct_plans, sql_text
from v$sql
group by sql_id, sql_text
having count(distinct plan_hash_value) >= &how_many
/

cat find_sql.sql
set verify off
set pagesize 999
col username format a13
col prog format a22
col sql_text format a41
col sid format 999
col child_number format 99999 heading CHILD
col ocategory format a10
col avg_etime format 9,999,999.99
col etime format 9,999,999.99

select sql_id, child_number, plan_hash_value plan_hash, executions execs, elapsed_time/1000000 etime,
(elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime, u.username, sql_text
from v$sql s, dba_users u
where upper(sql_text) like upper(nvl('&sql_text',sql_text))
and sql_text not like '%from v$sql where sql_text like nvl(%'
and sql_id like nvl('&sql_id',sql_id)
and u.user_id = s.parsing_user_id
/

cat flush_sql.sql
DECLARE
name varchar2(50);
version varchar2(3);
BEGIN
select regexp_replace(version,'\..*') into version from v$instance;
if version = '10' then
execute immediate
q'[alter session set events '5614566 trace name context forever']'; -- bug fix for 10.2.0.4 backport
end if;

select address||','||hash_value into name
from v$sqlarea
where sql_id like '&sql_id';

sys.dbms_shared_pool.purge(name,'C',1);

END;
/

Ref: http://kerryosborne.oracle-guy.com/2008/09/flush-a-single-sql-statement/
Proactive performance tuning

Best Practices: Proactively Avoiding Database and Query Performance Issues (Doc ID 1482811.1)
Parameter Recommendations for 12c

Part1
https://blogs.oracle.com/UPGRADE/entry/parameter_recommendations_for_oracle_database

Part2
https://blogs.oracle.com/UPGRADE/entry/parameter_recommendations_for_oracle_database1

http://orabase.org/index.php/2016/02/20/some-parameter-recommendations-for-oracle-12c-upgrade-your-database-now/
Adaptive query optimization

https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=371286426765420&id=2031605.1&_afrWindowMode=0&_adf.ctrl-state=31byi69wh_102 (Doc ID 2031605.1)
EXPLAIN PLAN from sql_id

-- Shared Pool
select * from table(dbms_xplan.display_cursor('SQL_ID',null,'ALL'));

-- AWR
select * from table(dbms_xplan.display_awr('SQL_ID',null,null,'ALL'));
select * from table(dbms_xplan.display_awr('SQL_ID',null,DBID,'ALL'));
Gather Index statistics

exec fnd_stats.gather_index_stats ('AR', 'RA_INTERFACE_LINES_N1', &percent);


DECODE

Example:
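A simple illustration of DECODE; the SCOTT.EMP demo table is assumed. The redo-log query below uses the same pattern to pivot log switches by hour.

```sql
-- DECODE(expr, search1, result1, search2, result2, ..., default)
select ename,
       decode(deptno, 10, 'ACCOUNTING',
                      20, 'RESEARCH',
                      30, 'SALES',
                          'OTHER') dept_name
from emp;
```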
Redo Log Switches

Standard
3 to 5 log switches per hour

Log switches by hour


Set lines 120;
set pages 999;
select to_char(first_time,'YYYY-MON-DD') day,
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') "00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') "01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') "02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') "03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') "04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') "05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') "06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') "07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') "08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') "09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') "10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') "11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') "12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') "13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') "14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') "15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') "16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') "17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') "18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') "19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') "20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') "21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') "22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') "23"
from v$log_history group by to_char(first_time,'YYYY-MON-DD') order by day;

Count and size of Redo log space


SELECT A.*,
Round(A.Count#*B.AVG#/1024/1024) Daily_Avg_Mb
FROM
(
SELECT
To_Char(First_Time,'YYYY-MM-DD') DAY,
Count(1) Count#,
Min(RECID) Min#,
Max(RECID) Max#
FROM
v$log_history
GROUP BY
To_Char(First_Time,'YYYY-MM-DD')
ORDER
BY 1 DESC
) A,
(
SELECT
Avg(BYTES) AVG#,
Count(1) Count#,
Max(BYTES) Max_Bytes,
Min(BYTES) Min_Bytes
FROM
v$log
)B
;

Exact time when you get a log switch


-- You need to enter dates

break on day skip 1

select
trunc(first_time) day,
substr(to_char(first_time, 'hh24:mi'), 1, 4) || '0' as time,
count(*) switches
from
v$log_history
where
trunc(first_time) between
trunc(to_date('12.05.2016', 'dd.mm.yyyy'))
and
trunc(to_date('13.05.2016', 'dd.mm.yyyy'))
group by
trunc(first_time),
substr(to_char(first_time, 'hh24:mi'), 1, 4)
having
count(*) > 1
order by 1, 2;
What is inside SQL Profile

select * from dba_sql_profiles where NAME = 'SYS_SQLPROF_0154b9b3877e000e';

SELECT * FROM DBMSHSXP_SQL_PROFILE_ATTR WHERE PROFILE_NAME = 'SYS_SQLPROF_0154b9b3877e000e';
List of last 500 SQL statements run on an Oracle database
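This heading has no query attached; a sketch that approximates it from the shared pool (only statements still cached in v$sql are visible):

```sql
select * from (
  select sql_id, last_active_time, parsing_schema_name, sql_text
  from v$sql
  order by last_active_time desc
)
where rownum <= 500;
```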
Analyze index

analyze index QA.IDX$$_08660001 validate structure;

select * from index_stats;

https://blogs.oracle.com/sysdba/entry/when_to_rebuild_index
Lock modes

https://jonathanlewis.wordpress.com/2010/06/21/locks/

https://jonathanlewis.wordpress.com/2010/06/21/locks/
Block changes of an Object

select obj.owner, obj.object_name, 


         to_char(sn.BEGIN_INTERVAL_TIME,'RRRR-MON-DD') start_day,
         sum(a.db_block_changes_delta) block_increase
from dba_hist_seg_stat a,
         dba_hist_snapshot sn,
         dba_objects obj
where sn.snap_id = a.snap_id
and obj.object_id = a.obj#
and obj.owner not in ('SYS','SYSTEM')
and end_interval_time between to_timestamp('01-MAY-2016','DD-MON-RRRR') and to_timestamp('13-MAY-2016','DD-MON-RRRR')
group by obj.owner, obj.object_name, 
         to_char(sn.BEGIN_INTERVAL_TIME,'RRRR-MON-DD')
--order by obj.owner, obj.object_name
order by sum(a.db_block_changes_delta) DESC
;

Ref: Doc ID 1395195.1 


High SWAP troubleshooting in AIX

lsps -s

svmon -P -O summary=basic,unit=MB,sortentity=pgsp

nmon (press m for memory)

nmon (press t for top processes)
pid to sid

col sid format 999999


col username format a20
col osuser format a15
select b.spid,a.sid, a.serial#,a.username, a.osuser
from v$session a, v$process b
where a.paddr= b.addr
and b.spid='&spid'
order by b.spid;
SID to PID

col sid format 999999


col username format a20
col osuser format a15
select a.sid, a.serial#,a.username, a.osuser, b.spid
from v$session a, v$process b
where a.paddr= b.addr
and a.sid='&sid'
order by a.sid;
Remove discoverer reports from database

https://community.oracle.com/thread/965141?start=0&tstart=0
truss output

kread, kwrite: The kernel side of read() or write() calls.

truss -d -o /tmp/a1.txt -p <PID>

truss -d -D -o /tmp/a1.txt -p <PID>

https://docs.oracle.com/cd/E19120-01/open.solaris/819-1634/rfsrefer-44/index.html

http://houseofbrick.com/unix-operating-systems-utilities-for-the-dba-part-2/
LOBSEGMENT to TABLE mapping

select table_name, column_name
from dba_lobs
where owner = '&lobowner' and segment_name = '&lobsegment';

Doc ID 829235.1
Purging programs

GL_JE_LINES
Doc ID 160481.1

OE_ORDER_LINES_HISTORY
Oe_order_lines_history Table (Doc ID 1383772.1)

CSI_PERIOD_CLOSE_SUMMARY:
Doc ID 1463673.1
Finding Trace Files

Trace files will be located in user_dump_dest

If you use parallel queries then some of your trace files will be located in background_dump_dest

https://docs.oracle.com/cd/B28359_01/server.111/b28310/diag006.htm
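From 11g onward, V$DIAG_INFO reports the diagnostic locations directly, which is quicker than deriving user_dump_dest:

```sql
-- Trace directory and the current session's trace file
select name, value
from v$diag_info
where name in ('Diag Trace', 'Default Trace File');
```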

EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE( session_id => &session_id, serial_num => &serial_number, waits => TRUE, binds => TRUE );

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE( session_id => x, serial_num => y );


Setting up trace

[prod]-->sqlplus apps/password

SQL*Plus: Release 12.1.0.2.0 Production on Fri May 20 03:50:22 2016

Copyright (c) 1982, 2014, Oracle. All rights reserved.

Last Successful login time: Fri May 20 2016 03:50:22 -04:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL> select sid from V$SESSION where AUDSID = userenv('SESSIONID');

SID
----------
5695

SQL> @sid2pid
Enter value for sid: 5695
old 4: and a.sid='&sid'
new 4: and a.sid='5695'

SID     SERIAL#  USERNAME             OSUSER          SPID
------- -------- -------------------- --------------- ----------------------------------
5695    8198     APPS                 prodora         24772848

SQL> alter session set sql_trace =true;

Session altered.

SQL> -- run the SQL to be traced here

Note: Trace file location on GSI Prod


/mnt/oraprod/proddb/diag/rdbms/prod/prod/trace

tkprof prod_ora_19988884.trc ravi11.out sort='(prsela,exeela,fchela)'


Automated Segment Adviser Advice

Ref: Sam Alapati book on 12c Performance Tuning

select
'Segment Advice-------------------------------' || chr(10) ||
'TABLESPACE_NAME :' || tablespace_name || chr(10) ||
'SEGMENT_OWNER :' || segment_owner || chr(10) ||
'SEGMENT_NAME :' || segment_name || chr(10) ||
'ALLOCATED_SPACE :' || allocated_space || chr(10) ||
'RECLAIMABLE_SPACE :' || reclaimable_space || chr(10) ||
'RECOMMENDATIONS :' || recommendations || chr(10) ||
'SOLUTION1 :' || c1 || chr(10) ||
'SOLUTION2 :' || c2 || chr(10) ||
'SOLUTION3 :' || c3 Advice
FROM
TABLE(dbms_space.asa_recommendations('FALSE', 'FALSE', 'FALSE'))
WHERE segment_name = 'JTF_TASKS_B'
;
Recommendations (Sam Alapati)

Auto increment in 12c - an alternative to sequences?

Segment Advisor job is not running in PROD. Schedule it manually via DBMS_SCHEDULER?

Anything running for more than 5 hours?
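On the "auto increment in 12c" question above: 12c identity columns are the built-in alternative to sequence-plus-trigger patterns. A minimal sketch; the table name is illustrative:

```sql
create table demo_ids (
  id   number generated always as identity,  -- backed by a system-managed sequence
  name varchar2(30)
);

insert into demo_ids (name) values ('first');  -- id assigned automatically
```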


Generate AWR baselines (Static)

begin
dbms_workload_repository.create_baseline(
start_snap_id => 31261,
end_snap_id => 31264,
baseline_name => 'TUE_12PM_TO_3PMEST',
expiration => null );
end;
/

set linesize 300;


select instance_number,baseline_id,baseline_name,baseline_type from dba_hist_baseline_details;

begin
dbms_workload_repository.drop_baseline('MONDAY_8AM_TO_9AMEST');
end;
/

begin
dbsnmp.bsln_internal.compute_statistics(1,3);
end;
/
ORA-4020 Deadlock while compiling PL/SQL package

alter package XXDBD_DW_STG.XXDBD_DW_ETL_SERV_CALL compile body;

The following query shows locks on the package body:

BREAK ON sid ON lock_id1 ON kill_sid
COL sid FOR 999999
COL lock_type FOR A38
COL mode_held FOR A12
COL mode_requested FOR A12
COL lock_id1 FOR A20
COL lock_id2 FOR A20
COL kill_sid FOR A50

SELECT s.sid,
       l.lock_type,
       l.mode_held,
       l.mode_requested,
       l.lock_id1,
       'alter system kill session '''|| s.sid|| ','|| s.serial#|| ''' immediate;' kill_sid
FROM   dba_lock_internal l,
       v$session s
WHERE  s.sid = l.session_id
AND    UPPER(l.lock_id1) LIKE '%&package_name%'
AND    l.lock_type = 'Body Definition Lock'
/
Number of migration/chained rows

SQL> select name, value from v$sysstat where name = 'table fetch continued row';

NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 2479004704

with a as (select sum(value) total_rows_read
           from v$sysstat
           where name like '%table%' and name != 'table fetch continued row'),
     b as (select value total_mig_chain_rows
           from v$sysstat
           where name = 'table fetch continued row')
select a.total_rows_read, b.total_mig_chain_rows,
       b.total_mig_chain_rows/a.total_rows_read pct_rows_mig_or_chained
from a, b;
APPEND (vs) APPEND_VALUES (vs) SYS_DL_CURSOR

APPEND showed up in version 8.1

APPEND_VALUES didn’t show up until 11.2

SYS_DL_CURSOR showed up in 9.1
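For reference: APPEND applies to INSERT ... SELECT, while APPEND_VALUES is the 11.2 hint for direct-path loading with a VALUES clause (often inside PL/SQL FORALL). A sketch with illustrative table names:

```sql
-- Direct-path insert from a query (available since 8.1)
insert /*+ APPEND */ into sales_hist select * from sales_stage;

-- Direct-path insert of literal rows (11.2 onward)
insert /*+ APPEND_VALUES */ into sales_hist (id, amount) values (1, 100);

commit;  -- the session cannot read a direct-path loaded segment until commit
```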


Conditional Index

Ref: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5010.htm#BGEHDECJ
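The "conditional index" reference points at function-based indexes; a common sketch that indexes only the rows of interest (rows mapping to NULL are excluded from a B-tree index), with illustrative table and column names:

```sql
-- Index only rows where status = 'OPEN'; all other rows map to NULL
-- and therefore occupy no space in the index.
create index orders_open_idx on orders (
  case when status = 'OPEN' then order_id end
);

-- Queries must use the same expression to pick up the index:
select * from orders
where case when status = 'OPEN' then order_id end = :id;
```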
Exadata Compression

Nice article by Arup Nanda: http://www.oracle.com/technetwork/testcontent/o10compression-082302.html
Compare SQL Hash values for same SQL_ID

If you are on 11g, there is a plan diff package which will help you compare the different plans for a SQL statement.

SELECT DISTINCT plan_hash_value
FROM v$sql_plan  -- replace v$sql_plan with dba_hist_sql_plan if the info has been flushed from the shared pool
WHERE sql_id = '<sql_id>';

Then you can use the below:

dbms_xplan.diff_plan_awr(
   sql_id           IN VARCHAR2,
   plan_hash_value1 IN NUMBER,
   plan_hash_value2 IN NUMBER)

e.g.: select dbms_xplan.diff_plan_awr('abcd123456',9999999,888888) from dual;


EUL5_QPP_STATS purge

Ref: http://learndiscoverer.blogspot.com/2008/06/monitoring-and-removing-old-statistics.html

When did those reports last run?


http://learndiscoverer.blogspot.com/2006/12/when-did-those-reports-last-run.html

Clamping down Discoverer:


http://learndiscoverer.blogspot.com/2009/03/clamping-down-discoverer.html

Last accessed date for a workbook:


http://learndiscoverer.blogspot.com/2006/04/last-accessed-date-for-workbook.html

How many users we have using Discoverer?


SELECT
UPPER(QS.QS_DOC_OWNER) USERNAME,
COUNT(QS_ID) REPORTS_RUN,
MAX(QS.QS_CREATED_DATE) LAST_USED,
MIN(QS.QS_CREATED_DATE) FIRST_USED
FROM DISCEUL_US.EUL5_QPP_STATS QS
WHERE QS.QS_CREATED_DATE > TO_DATE('01-JAN-2006','DD-MON-YYYY') AND
QS_DOC_OWNER IS NOT NULL
GROUP BY UPPER(QS.QS_DOC_OWNER)
ORDER BY MAX(QS.QS_CREATED_DATE);
Ref: http://learndiscoverer.blogspot.com/2008_06_01_archive.html

DBA_JOBS_RUNNING:

Doc ID 375777.1

Doc ID 390304.1

NOTE:370855.1 - How To Store Scheduled Workbooks Results In A Different Schema Other than APPS?
NOTE:459747.1 - How To Find A Particular Scheduled Workbook's Result Set Table In Discoverer
NOTE:555612.1 - How and When Can You Delete Discoverer Scheduled Workbooks
Action Item: Change priority of Disco jobs?

Information about when the table was created, the creator, the last updated date, and who updated it can be obtained from EUL5_BQ_TABLES:
select BQT_TABLE_NAME, BQT_CREATED_BY, BQT_CREATED_DATE from
DISCEUL_US.EUL5_BQ_TABLES order by 3;

Can EUL$ Tables Owned by Apps Schema Be Dropped from the Database? (Doc ID
377012.1)

Result set to Workbook/sheet mapping


Doc ID 747806.1

select distinct BQT_BQ_ID, BQT_CREATED_BY from DISCEUL_US.EUL5_BQ_TABLES where


BQT_TABLE_NAME like 'EUL5_B160215071426Q1R1%';

select BQ_BS_ID from DISCEUL_US.EUL5_BATCH_QUERIES where BQ_ID='12849897';

select BS_BR_ID, BS_SHEET_NAME from DISCEUL_US.EUL5_BATCH_SHEETS where BS_ID='12849892'

select BR_ID, BR_WORKBOOK_NAME from DISCEUL_US.EUL5_BATCH_REPORTS where BR_ID =


'12849885';

Workbook/sheet to Result set mapping:


select BR_ID, BR_WORKBOOK_NAME from DISCEUL_US.EUL5_BATCH_REPORTS where
BR_WORKBOOK_NAME='CB SVC400 Service Contract Lines Billing Streams1286';

select BS_ID, BS_BR_ID, BS_SHEET_NAME from DISCEUL_US.EUL5_BATCH_SHEETS where


BS_BR_ID='12849885';

select BQ_ID, BQ_BS_ID from DISCEUL_US.EUL5_BATCH_QUERIES where BQ_BS_ID='12849892';

select BQT_ID, BQT_BQ_ID, BQT_TABLE_NAME from DISCEUL_US.EUL5_BQ_TABLES where


BQT_BQ_ID='12849897';

select BQT_TABLE_NAME, BQT_CREATED_BY, BQT_CREATED_DATE from


DISCEUL_US.EUL5_BQ_TABLES where BQT_TABLE_NAME='EUL5_B160215071426Q1R1';

Drop orphaned scheduled reports

How to recover deleted discoverer workbook


Last accessed date of a oracle workbook

SELECT
   QS.QS_DOC_OWNER USERNAME,
   QS.QS_DOC_NAME WORKBOOK,
   QS.QS_DOC_DETAILS WORKSHEET,
   MAX(QS.QS_CREATED_DATE) LAST_USED
FROM
   EUL5_QPP_STATS QS
GROUP BY
   QS.QS_DOC_OWNER,
   QS.QS_DOC_NAME,
   QS.QS_DOC_DETAILS;
Ref: http://learndiscoverer.blogspot.com/2006/04/last-accessed-date-for-
workbook.html

SELECT
QS.QS_DOC_OWNER USERNAME,
QS.QS_DOC_NAME WORKBOOK,
QS.QS_DOC_DETAILS WORKSHEET,
MAX(QS.QS_CREATED_DATE) LAST_USED
FROM
DISCEUL_US.EUL5_QPP_STATS QS
WHERE QS_DOC_NAME = 'CB SVC400 Service Contract Lines Billing Streams1286'
GROUP BY QS_DOC_OWNER, QS_DOC_NAME, QS_DOC_DETAILS ;

SDOBNOO CB SVC400 Service Contract Lines Billing Streams1286 Sheet 1 05-MAR-16

Query:
--select count(*) from (
-- SELECT t9.*, T10.blocks*8/1000 SIZE_IN_MB
SELECT SUM(T10.blocks)*8/1024/1024 SIZE_IN_GB
FROM dba_tables T10,
(
SELECT /*+ PARALLEL(8) */
T8.QS_DOC_OWNER USERNAME, T8.QS_DOC_NAME WORKBOOK,T8.QS_DOC_DETAILS,
T7.BQT_TABLE_NAME, MAX(T8.QS_CREATED_DATE)
FROM
(SELECT T6.BR_ID, T6.BR_WORKBOOK_NAME, T5.*
FROM DISCEUL_US.EUL5_BATCH_REPORTS T6,
( SELECT T3.BS_BR_ID, T3.BS_SHEET_NAME, T4.*
FROM DISCEUL_US.EUL5_BATCH_SHEETS T3,
(
SELECT T1.BQ_BS_ID, T2.*
FROM
DISCEUL_US.EUL5_BATCH_QUERIES T1,
(select distinct BQT_BQ_ID, BQT_CREATED_BY, BQT_TABLE_NAME from DISCEUL_US.EUL5_BQ_TABLES)
T2
WHERE T2.BQT_BQ_ID = T1.BQ_ID
) T4
WHERE
T4.BQ_BS_ID = T3.BS_ID
) T5
WHERE T6.BR_ID = T5.BS_BR_ID
) T7,
DISCEUL_US.EUL5_QPP_STATS T8 WHERE
T7.BR_WORKBOOK_NAME = T8.QS_DOC_NAME
-- AND T7.BS_SHEET_NAME = T8.QS_DOC_DETAILS
GROUP BY T8.QS_DOC_OWNER, T8.QS_DOC_NAME,T8.QS_DOC_DETAILS, T7.BQT_TABLE_NAME
) T9
WHERE T9.BQT_TABLE_NAME = T10.TABLE_NAME
-- ORDER BY 6 DESC NULLS LAST
--)
WORKBOOk (vs) Sheet (vs) Result table name

SELECT
T6.BR_WORKBOOK_NAME, T5.BS_SHEET_NAME, T5.BQT_TABLE_NAME
FROM DISCEUL_US.EUL5_BATCH_REPORTS T6,
( SELECT T3.BS_BR_ID, T3.BS_SHEET_NAME, T4.*
FROM DISCEUL_US.EUL5_BATCH_SHEETS T3,
(
SELECT T1.BQ_BS_ID, T2.*
FROM
DISCEUL_US.EUL5_BATCH_QUERIES T1,
(select distinct BQT_BQ_ID, BQT_CREATED_BY, BQT_TABLE_NAME from DISCEUL_US.EUL5_BQ_TABLES)
T2
WHERE T2.BQT_BQ_ID = T1.BQ_ID
) T4
WHERE
T4.BQ_BS_ID = T3.BS_ID
) T5
WHERE T6.BR_ID = T5.BS_BR_ID
;
EUL5_BATCH_REPORTS

- Contains one row per scheduled worksheet

- Contains base information that the user assigned when the schedule was set up

BR_ID - The unique ID assigned to this scheduled worksheet.

BR_EU_ID - The user ID of the owner of the schedule. This links to EU_ID in the EUL5_EUL_USERS
table, where the EU_USERNAME column will tell you the actual user.
BR_NEXT_RUN_DATE - Contains the date and time of the next run.
BR_EXPIRY - Contains the number of days to keep the results.
BR_NUM_FREQ_UNITS - Contains the number of units to be used between refreshes, as qualified by
BR_RFU_ID.
BR_RFU_ID - Contains a code indicating the frequency unit: 2000 = minutes, 2001 = hours,
2002 = days, 2003 = weeks, 2004 = months, 2005 = years.
BR_AUTO_REFRESH - Contains 0 or 1, where 1 means automatically refresh and 0 means execute just
once.
Final Query
Query1
select BQT.BQT_ID, BRR.BRR_ID, BR.BR_ID, BQT.BQT_TABLE_NAME, BQT.BQT_CREATED_DATE,
BRR.BRR_CREATED_DATE, BRR.BRR_RUN_DATE, BRR.BRR_STATE,
BR.BR_NAME, BR.BR_WORKBOOK_NAME, BR.BR_EXPIRY, BR.BR_CREATED_BY, BR.BR_EU_ID,
EU.EU_USERNAME, FU.USER_NAME
FROM
disceul_us.eul5_bq_tables BQT
JOIN DISCEUL_US.EUL5_BR_RUNS BRR ON BQT.BQT_BRR_ID = BRR.BRR_ID
JOIN DISCEUL_US.EUL5_BATCH_REPORTS BR ON BRR.BRR_BR_ID = BR.BR_ID
JOIN DISCEUL_US.EUL5_EUL_USERS EU ON BR.BR_EU_ID = EU.EU_ID
LEFT JOIN APPS.FND_USER FU ON REPLACE(EU.EU_USERNAME,'#','') = FU.USER_ID
;

Query2:
WITH APPS_SEG_INFO AS
(
SELECT /*+ PARALLEL(8) */ SEGMENT_NAME, SUM(BLOCKS)*8/1024 BQT_TABLE_SIZE_MB FROM
dba_segments
where OWNER = 'APPS' AND SEGMENT_NAME like '%EUL5_B%R%' GROUP BY SEGMENT_NAME
)
select
BQT.BQT_TABLE_NAME "TABLE_NAME(BQT)", ASI.BQT_TABLE_SIZE_MB, BQT.BQT_CREATED_DATE
"TABLE_CREATED_DATE(BQT)",
BRR.BRR_CREATED_DATE "TABLE_CREATED_DATE(BRR)", BRR.BRR_RUN_DATE
"SCHEDULE_RUN_DATE(BRR)",
BR.BR_EXPIRY, BR.BR_NAME, BR.BR_WORKBOOK_NAME, FU.USER_NAME
--BQT.BQT_ID, BRR.BRR_ID, BR.BR_ID, BRR.BRR_CREATED_DATE, BRR.BRR_RUN_DATE, BRR.BRR_STATE,
--BQT.BQT_TABLE_NAME, BQT.BQT_CREATED_DATE, BR.BR_NAME, BR.BR_WORKBOOK_NAME,
BR.BR_EXPIRY, BR.BR_CREATED_BY, BR.BR_EU_ID,
--EU.EU_USERNAME, FU.USER_NAME, ASI.BQT_TABLE_SIZE_MB
FROM
disceul_us.eul5_bq_tables BQT
JOIN DISCEUL_US.EUL5_BR_RUNS BRR ON BQT.BQT_BRR_ID = BRR.BRR_ID
JOIN DISCEUL_US.EUL5_BATCH_REPORTS BR ON BRR.BRR_BR_ID = BR.BR_ID
JOIN DISCEUL_US.EUL5_EUL_USERS EU ON BR.BR_EU_ID = EU.EU_ID
LEFT JOIN APPS.FND_USER FU ON REPLACE(EU.EU_USERNAME,'#','') = FU.USER_ID
JOIN APPS_SEG_INFO ASI ON BQT.BQT_TABLE_NAME = ASI.SEGMENT_NAME
ORDER BY ASI.BQT_TABLE_SIZE_MB DESC
;
Query3
WITH APPS_SEG_INFO AS
(
SELECT /*+ PARALLEL(8) */ SEGMENT_NAME, SUM(BLOCKS)*8/1024 BQT_TABLE_SIZE_MB FROM
dba_segments
where OWNER = 'APPS' AND SEGMENT_NAME like '%EUL5_B%R%' GROUP BY SEGMENT_NAME
)
select
BQT.BQT_TABLE_NAME "TABLE_NAME(BQT)", ASI.BQT_TABLE_SIZE_MB, BQT.BQT_CREATED_DATE
"TABLE_CREATED_DATE(BQT)",
BRR.BRR_CREATED_DATE "TABLE_CREATED_DATE(BRR)", BRR.BRR_RUN_DATE
"SCHEDULE_RUN_DATE(BRR)",
BR.BR_NEXT_RUN_DATE, BR.BR_AUTO_REFRESH, BR.BR_EXPIRY, BR.BR_NAME,
BR.BR_WORKBOOK_NAME, FU.USER_NAME
--BQT.BQT_ID, BRR.BRR_ID, BR.BR_ID, BRR.BRR_CREATED_DATE, BRR.BRR_RUN_DATE, BRR.BRR_STATE,
--BQT.BQT_TABLE_NAME, BQT.BQT_CREATED_DATE, BR.BR_NAME, BR.BR_WORKBOOK_NAME,
BR.BR_EXPIRY, BR.BR_CREATED_BY, BR.BR_EU_ID,
--EU.EU_USERNAME, FU.USER_NAME, ASI.BQT_TABLE_SIZE_MB
FROM
disceul_us.eul5_bq_tables BQT
JOIN DISCEUL_US.EUL5_BR_RUNS BRR ON BQT.BQT_BRR_ID = BRR.BRR_ID
JOIN DISCEUL_US.EUL5_BATCH_REPORTS BR ON BRR.BRR_BR_ID = BR.BR_ID
JOIN DISCEUL_US.EUL5_EUL_USERS EU ON BR.BR_EU_ID = EU.EU_ID
LEFT JOIN APPS.FND_USER FU ON REPLACE(EU.EU_USERNAME,'#','') = FU.USER_ID
JOIN APPS_SEG_INFO ASI ON BQT.BQT_TABLE_NAME = ASI.SEGMENT_NAME
ORDER BY ASI.BQT_TABLE_SIZE_MB DESC
;

select MIN(CREATED) from dba_objects where owner = 'APPS' and object_name like '%EUL5_B16%R%'
Display date with hours/minutes information

to_char(date1, 'DD-MON-YYYY HH24:MI')


Select from a partition

SELECT count(*) from xxdbd_dw.xxdbd_dw_fact_contract_mgmt PARTITION(20171)


Oracle table growth prediction

select * FROM table(dbms_space.OBJECT_GROWTH_TREND ('AR', 'RA_INTERFACE_LINES_ALL', 'TABLE'));

Columns in the following order: Timepoint, Space_usage, Space_Alloc, Quality


Table ROWS COUNT

Ref: https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1660875645686
Find out expired reports

select BQT.bqt_table_name, BQT.BQT_CREATED_DATE, BR.BR_EXPIRY


FROM
disceul_us.eul5_bq_tables BQT
JOIN DISCEUL_US.EUL5_BR_RUNS BRR ON BQT.BQT_BRR_ID = BRR.BRR_ID
JOIN DISCEUL_US.EUL5_BATCH_REPORTS BR ON BRR.BRR_BR_ID = BR.BR_ID
JOIN DISCEUL_US.EUL5_EUL_USERS EU ON BR.BR_EU_ID = EU.EU_ID
LEFT JOIN APPS.FND_USER FU ON REPLACE(EU.EU_USERNAME,'#','') = FU.USER_ID
WHERE
BQT.BQT_CREATED_DATE + BR.BR_EXPIRY < sysdate
AND EXISTS (SELECT table_name from dba_tables t1 where t1.table_name = BQT.BQT_TABLE_NAME);
Find out AUTO Refresh reports

select FU.USER_NAME, FU.description,


BR.BR_CREATED_DATE,BR.BR_NEXT_RUN_DATE,
'Every ' || br.br_num_freq_units || ' ' ||
DECODE(br.br_rfu_id,
2000,'Minutes',
2001,'Hours',
2002,'Days',
2003,'Weeks',
2004,'Months','Years') "Run Frequency",
BR.BR_WORKBOOK_NAME, BR.BR_NUM_FREQ_UNITS,BR.BR_RFU_ID,
BR.* from
DISCEUL_US.EUL5_BATCH_REPORTS BR, APPS.FND_USER FU
where
BR.BR_AUTO_REFRESH = 1
AND replace(BR.BR_CREATED_BY, '#','') = FU.USER_ID
ORDER BY USER_NAME;
BRR_STATE

How to Cancel or Delete a Scheduled Discoverer Report When Running (Doc ID 429415.1)

BRR_STATE has nine allowable values, from 1 to 9, with the following descriptions:

BRR_STATE = 1 --> 'Scheduled'


BRR_STATE = 2 --> 'Running Query'
BRR_STATE = 3 --> 'Error Submitting'
BRR_STATE = 4 --> 'Error While Running Query'
BRR_STATE = 5 --> 'Report Deleted'
BRR_STATE = 6 --> 'EUL has Changed'
BRR_STATE = 7 --> 'Report Expired'
BRR_STATE = 8 --> 'Reached Max Rows Limit'
BRR_STATE = 9 --> 'Report Ready'
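A quick way to see how many runs sit in each state (schema name as used elsewhere in this document):

```sql
select brr_state, count(*) runs
from DISCEUL_US.EUL5_BR_RUNS
group by brr_state
order by brr_state;
```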
Disco clean up

1. Drop tables first


spool 1.drop_eul5.sql
select distinct 'DROP TABLE APPS.' || BQT.BQT_TABLE_NAME || ' purge;'
from
disceul_us.eul5_BQ_Tables BQT
JOIN DISCEUL_US.EUL5_BATCH_QUERIES BQ ON BQT.BQT_BQ_ID = BQ.BQ_ID
JOIN DISCEUl_US.EUL5_BATCH_SHEETS BS ON BQ.BQ_BS_ID = BS.BS_ID
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7)
;
spool off;

2. Delete from EUL5_BQ_TABLES


spool 2.eul5_bq_delete.sql
set pagesize 0
SELECT DISTINCT 'DELETE FROM DISCEUL_US.EUL5_BQ_TABLES WHERE BQT_ID = ' || BQT_ID || ';'
from
disceul_us.eul5_BQ_Tables BQT
JOIN DISCEUL_US.EUL5_BATCH_QUERIES BQ ON BQT.BQT_BQ_ID = BQ.BQ_ID
JOIN DISCEUl_US.EUL5_BATCH_SHEETS BS ON BQ.BQ_BS_ID = BS.BS_ID
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7)
;
spool off

3. Delete from EUL5_BQ_DEPS


spool 3.eul5_bq_deps_delete.sql
set pagesize 0
SELECT DISTINCT 'DELETE FROM DISCEUL_US.EUL5_BQ_DEPS WHERE BQD_ID = ' || BQD.BQD_ID || ';'
from
disceul_us.EUL5_BQ_DEPS BQD
JOIN DISCEUL_US.EUL5_BATCH_QUERIES BQ ON BQD.BQD_BQ_ID = BQ.BQ_ID
JOIN DISCEUl_US.EUL5_BATCH_SHEETS BS ON BQ.BQ_BS_ID = BS.BS_ID
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7)
;
spool off

4. Delete from eul5_segments


spool 4.eul5_seg_delete.sql
set pagesize 0
SELECT DISTINCT 'DELETE FROM DISCEUL_US.EUl5_SEGMENTS WHERE SEG_ID = ' || SEG.SEG_ID || ';'
from
disceul_us.EUl5_SEGMENTS SEG
JOIN DISCEUL_US.EUL5_BATCH_QUERIES BQ ON SEG.SEG_BQ_ID = BQ.BQ_ID
JOIN DISCEUl_US.EUL5_BATCH_SHEETS BS ON BQ.BQ_BS_ID = BS.BS_ID
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7);
spool off;

5. Delete from eul5_batch_queries


spool 5.eul5_bq_delete.sql
set pagesize 0
SELECT DISTINCT 'DELETE FROM DISCEUL_US.EUL5_BATCH_QUERIES WHERE BQ_ID = ' || BQ.BQ_ID || ';'
from
DISCEUL_US.EUL5_BATCH_QUERIES BQ
JOIN DISCEUl_US.EUL5_BATCH_SHEETS BS ON BQ.BQ_BS_ID = BS.BS_ID
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7)
;
spool off;

6. delete from eul5_batch_params


spool 6.eul5_bp_delete.sql
set pagesize 0
SELECT DISTINCT 'DELETE FROM DISCEUL_US.EUL5_BATCH_PARAMS WHERE BP_ID = ' || BP.BP_ID || ';'
from
DISCEUL_US.EUL5_BATCH_PARAMS BP
JOIN DISCEUl_US.EUL5_BATCH_SHEETS BS ON BP.BP_BS_ID = BS.BS_ID
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7)
;
spool off;

7. delete from eul5_batch_sheets


spool 7.eul5_bs_delete.sql
set pagesize 0
SELECT DISTINCT 'DELETE FROM DISCEUL_US.EUL5_BATCH_SHEETS WHERE BS_ID = ' || BS.BS_ID || ';'
from
DISCEUl_US.EUL5_BATCH_SHEETS BS
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7)
;
spool off;
8. delete from eul5_br_runs
spool 8.eul5_br_delete.sql
set pagesize 0
SELECT DISTINCT 'DELETE FROM DISCEUL_US.EUL5_BR_RUNS WHERE BRR_ID = ' || BRR.BRR_ID || ';'
from DISCEUl_US.EUL5_BR_RUNS BRR WHERE BRR.BRR_STATE IN (4, 7)
;
spool off;

9. Run all
@1.drop_eul5.sql
@2.eul5_bq_delete.sql
@3.eul5_bq_deps_delete.sql
@4.eul5_seg_delete.sql
@5.eul5_bq_delete.sql
@6.eul5_bp_delete.sql
@7.eul5_bs_delete.sql
@8.eul5_br_delete.sql
commit;
Find out DISCO user sessions

select sid,serial#,module, program, username, to_char (logon_time,'hh24:mi:ss') from v$session where


module like '%Disc%' ;

PROGRAM is an important column: it shows dis51ws@machinename for both interactive users and scheduled
workbooks. Interactive user sessions have (TNS V1-V3) appended to the program name, while scheduled
workbooks have (J000) appended.
Job ID to DISCO user mapping

select a.sid, a.job, b.br_name, a.THIS_DATE||' ' || a.THIS_SEC "Start time", FU.user_name from
dba_jobs_running a, disceul_us.eul5_batch_reports b, applsys.fnd_user fu
where a.job = b.br_job_id
AND replace(b.BR_CREATED_BY, '#','') = FU.USER_ID;
DISCO sessions running over 3 hours

set linesize 300


set pagesize 300
col user_id for a7
col module for a40
col program for a35
col REPORT_NAME for a40
col USER_LOGON_TIME for a20
col logon_hours a5
select vs.sid,vs.serial#,
CASE WHEN USER_NAME IS NULL THEN substr(MODULE,10,7) ELSE USER_NAME END USER_ID,
CASE WHEN SUBSTR(MODULE,8,1) IS NULL THEN 'Scheduled' ELSE 'ADHOC' END Report_TYPE,
vs.module, vs.program, to_char(vs.logon_time, 'DD-MON-YY HH24:MI:SS') user_logon_time,
trunc ((sysdate - vs.logon_time) * 24,2) logon_hours,
djr.br_name REPORT_NAME
-- , djr.report_start_time, djr.HEALTH_CHK_TIME, djr.user_name
from v$session vs
LEFT JOIN (select a.sid, a.job, b.br_name, a.THIS_DATE||' ' || a.THIS_SEC Report_Start_time,
to_char(SYSDATE, 'DD-MON-YY HH24:MI:SS') HEALTH_CHK_TIME, FU.user_name from
dba_jobs_running a, disceul_us.eul5_batch_reports b, applsys.fnd_user fu
where a.job = b.br_job_id
AND replace(b.BR_CREATED_BY, '#','') = FU.USER_ID ) djr ON vs.sid = djr.sid
where vs.module like '%Disc%' AND trunc ((sysdate - vs.logon_time) * 24,2) >= 3
ORDER BY trunc ((sysdate - vs.logon_time) * 24,2) desc
;
LEAD function

select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS BLOCKS_DIFF_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS BLOCKS_DIFF_2DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS BLOCKS_DIFF_7DAY
, LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) BLOCKS_PREV_14DAY
, blocks - LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS BLOCKS_DIFF_14DAY
, LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) BLOCKS_PREV_21DAY
, blocks - LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS BLOCKS_DIFF_21DAY
, LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) BLOCKS_PREV_30DAY
, blocks - LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS BLOCKS_DIFF_30DAY
from monuser.v_gsi_dba_segments
WHERE SEGMENT_NAME = 'ABM_ACC_MAP_SUM_REP';

SELECT * FROM
(
select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_2DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_7DAY
, LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_14DAY
, blocks - LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_14DAY
, LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_21DAY
, blocks - LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_21DAY
, LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_30DAY
, blocks - LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_30DAY
from monuser.v_gsi_dba_segments
-- WHERE SEGMENT_NAME = 'ABM_ACC_MAP_SUM_REP'
) T1
WHERE T1.rno = 1
AND blocks_diff_1day > 0 AND OWNER != 'SYS' ORDER BY blocks_diff_1day desc
;

SELECT * FROM
(
select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_1DAY
, INSERT_DATE - 1 DATE_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_2DAY
, INSERT_DATE - 2 DATE_2DAY
, LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_3DAY
, blocks - LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_3DAY
, INSERT_DATE - 3 DATE_3DAY
, LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_4DAY
, blocks - LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_4DAY
, INSERT_DATE - 4 DATE_4DAY
, LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_5DAY
, blocks - LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_5DAY
, INSERT_DATE - 5 DATE_5DAY
, LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_6DAY
, blocks - LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_6DAY
, INSERT_DATE - 6 DATE_6DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_7DAY
, INSERT_DATE - 7 DATE_7DAY
, LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_8DAY
, blocks - LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_8DAY
, INSERT_DATE - 8 DATE_8DAY
, LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC )
BLOCKS_PREV_9DAY
, blocks - LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_9DAY
, INSERT_DATE - 9 DATE_9DAY
, LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_10DAY
, blocks - LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_10DAY
, INSERT_DATE - 10 DATE_10DAY
, LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_11DAY
, blocks - LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_11DAY
, INSERT_DATE - 11 DATE_11DAY
, LEAD(blocks,12,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_12DAY
, blocks - LEAD(blocks,12,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_12DAY
, INSERT_DATE - 12 DATE_12DAY
, LEAD(blocks,13,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_13DAY
, blocks - LEAD(blocks,13,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_13DAY
, INSERT_DATE - 13 DATE_13DAY
, LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_14DAY
, blocks - LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_14DAY
, INSERT_DATE - 14 DATE_14DAY
, LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_21DAY
, blocks - LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_21DAY
, INSERT_DATE - 21 DATE_21DAY
, LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_30DAY
, blocks - LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_30DAY
, INSERT_DATE - 30 DATE_30DAY
from monuser.v_gsi_dba_segments
-- WHERE SEGMENT_NAME = 'ABM_ACC_MAP_SUM_REP'
) T1
WHERE T1.rno = 1
-- AND blocks_diff_1day > 0
AND T1.OWNER != 'SYS' AND T1.SEGMENT_NAME = 'GL_IMPORT_REFERENCES'
;

MTL_MATERIAL_TRANSACTIONS_IDX2

SELECT OWNER, SEGMENT_NAME, INSERT_DATE, BLOCKS, BLOCKS*8/1024 growth_MB_0day
,DATE_1DAY, BLOCKS_PREV_1DAY
, CASE WHEN BLOCKS_PREV_1DAY = 0 THEN -1 ELSE BLOCKS_DIFF_1DAY END BLOCKS_DIFF_1DAY
, CASE WHEN BLOCKS_PREV_1DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_1DAY)*8/1024 END growth_MB_1day
,DATE_2DAY, BLOCKS_PREV_2DAY, BLOCKS_DIFF_2DAY, (BLOCKS - BLOCKS_PREV_2DAY)*8/1024 growth_MB_2day
,DATE_3DAY, BLOCKS_PREV_3DAY, BLOCKS_DIFF_3DAY, (BLOCKS - BLOCKS_PREV_3DAY)*8/1024 growth_MB_3day
,DATE_4DAY, BLOCKS_PREV_4DAY, BLOCKS_DIFF_4DAY, (BLOCKS - BLOCKS_PREV_4DAY)*8/1024 growth_MB_4day
,DATE_5DAY, BLOCKS_PREV_5DAY, BLOCKS_DIFF_5DAY, (BLOCKS - BLOCKS_PREV_5DAY)*8/1024 growth_MB_5day
,DATE_6DAY, BLOCKS_PREV_6DAY, BLOCKS_DIFF_6DAY, (BLOCKS - BLOCKS_PREV_6DAY)*8/1024 growth_MB_6day
,DATE_7DAY, BLOCKS_PREV_7DAY, BLOCKS_DIFF_7DAY, (BLOCKS - BLOCKS_PREV_7DAY)*8/1024 growth_MB_7day
FROM
(
select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_1DAY
, INSERT_DATE - 1 DATE_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_2DAY
, INSERT_DATE - 2 DATE_2DAY
, LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_3DAY
, blocks - LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_3DAY
, INSERT_DATE - 3 DATE_3DAY
, LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_4DAY
, blocks - LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_4DAY
, INSERT_DATE - 4 DATE_4DAY
, LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_5DAY
, blocks - LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_5DAY
, INSERT_DATE - 5 DATE_5DAY
, LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_6DAY
, blocks - LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_6DAY
, INSERT_DATE - 6 DATE_6DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_7DAY
, INSERT_DATE - 7 DATE_7DAY
, LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_8DAY
, blocks - LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_8DAY
, INSERT_DATE - 8 DATE_8DAY
, LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC )
BLOCKS_PREV_9DAY
, blocks - LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_9DAY
, INSERT_DATE - 9 DATE_9DAY
, LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_10DAY
, blocks - LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_10DAY
, INSERT_DATE - 10 DATE_10DAY
, LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_11DAY
, blocks - LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_11DAY
, INSERT_DATE - 11 DATE_11DAY
, LEAD(blocks,12,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_12DAY
, blocks - LEAD(blocks,12,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_12DAY
, INSERT_DATE - 12 DATE_12DAY
, LEAD(blocks,13,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_13DAY
, blocks - LEAD(blocks,13,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_13DAY
, INSERT_DATE - 13 DATE_13DAY
, LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_14DAY
, blocks - LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_14DAY
, INSERT_DATE - 14 DATE_14DAY
, LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_21DAY
, blocks - LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_21DAY
, INSERT_DATE - 21 DATE_21DAY
, LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_30DAY
, blocks - LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_30DAY
, INSERT_DATE - 30 DATE_30DAY
from monuser.v_gsi_dba_segments
-- WHERE SEGMENT_NAME = 'ABM_ACC_MAP_SUM_REP'
) T1
WHERE T1.rno = 1
-- AND blocks_diff_1day > 0
AND T1.OWNER != 'SYS' AND SEGMENT_NAME NOT LIKE 'BIN$%'
--AND T1.SEGMENT_NAME = 'GL_IMPORT_REFERENCES'
AND SEGMENT_NAME LIKE 'MTL%IDX%2%'
ORDER BY BLOCKS_DIFF_1DAY DESC;
SELECT OWNER, SEGMENT_NAME,
INSERT_DATE, BLOCKS, BLOCKS*8/1024 growth_MB_0day

,DATE_1DAY, BLOCKS_PREV_1DAY
, CASE WHEN BLOCKS_PREV_1DAY = 0 THEN -1 ELSE BLOCKS_DIFF_1DAY END BLOCKS_DIFF_1DAY
, CASE WHEN BLOCKS_PREV_1DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_1DAY)*8/1024 END
growth_MB_1day

,DATE_2DAY, BLOCKS_PREV_2DAY
, CASE WHEN BLOCKS_PREV_2DAY = 0 THEN -1 ELSE BLOCKS_DIFF_2DAY END BLOCKS_DIFF_2DAY
, CASE WHEN BLOCKS_PREV_2DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_2DAY)*8/1024 END
growth_MB_2day

,DATE_3DAY, BLOCKS_PREV_3DAY
, CASE WHEN BLOCKS_PREV_3DAY = 0 THEN -1 ELSE BLOCKS_DIFF_3DAY END BLOCKS_DIFF_3DAY
, CASE WHEN BLOCKS_PREV_3DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_3DAY)*8/1024 END
growth_MB_3day

,DATE_4DAY, BLOCKS_PREV_4DAY
, CASE WHEN BLOCKS_PREV_4DAY = 0 THEN -1 ELSE BLOCKS_DIFF_4DAY END BLOCKS_DIFF_4DAY
, CASE WHEN BLOCKS_PREV_4DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_4DAY)*8/1024 END
growth_MB_4day

,DATE_5DAY, BLOCKS_PREV_5DAY
, CASE WHEN BLOCKS_PREV_5DAY = 0 THEN -1 ELSE BLOCKS_DIFF_5DAY END BLOCKS_DIFF_5DAY
, CASE WHEN BLOCKS_PREV_5DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_5DAY)*8/1024 END
growth_MB_5day

,DATE_6DAY, BLOCKS_PREV_6DAY
, CASE WHEN BLOCKS_PREV_6DAY = 0 THEN -1 ELSE BLOCKS_DIFF_6DAY END BLOCKS_DIFF_6DAY
, CASE WHEN BLOCKS_PREV_6DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_6DAY)*8/1024 END
growth_MB_6day

,DATE_7DAY, BLOCKS_PREV_7DAY
, CASE WHEN BLOCKS_PREV_7DAY = 0 THEN -1 ELSE BLOCKS_DIFF_7DAY END BLOCKS_DIFF_7DAY
, CASE WHEN BLOCKS_PREV_7DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_7DAY)*8/1024 END
growth_MB_7day

,DATE_14DAY, BLOCKS_PREV_14DAY
, CASE WHEN BLOCKS_PREV_14DAY = 0 THEN -1 ELSE BLOCKS_DIFF_14DAY END BLOCKS_DIFF_14DAY
, CASE WHEN BLOCKS_PREV_14DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_14DAY)*8/1024 END
growth_MB_14day

,DATE_21DAY, BLOCKS_PREV_21DAY
, CASE WHEN BLOCKS_PREV_21DAY = 0 THEN -1 ELSE BLOCKS_DIFF_21DAY END BLOCKS_DIFF_21DAY
, CASE WHEN BLOCKS_PREV_21DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_21DAY)*8/1024 END
growth_MB_21day

,DATE_30DAY, BLOCKS_PREV_30DAY
, CASE WHEN BLOCKS_PREV_30DAY = 0 THEN -1 ELSE BLOCKS_DIFF_30DAY END BLOCKS_DIFF_30DAY
, CASE WHEN BLOCKS_PREV_30DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_30DAY)*8/1024 END
growth_MB_30day

FROM
(
select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_1DAY
, INSERT_DATE - 1 DATE_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_2DAY
, INSERT_DATE - 2 DATE_2DAY
, LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_3DAY
, blocks - LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_3DAY
, INSERT_DATE - 3 DATE_3DAY
, LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_4DAY
, blocks - LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_4DAY
, INSERT_DATE - 4 DATE_4DAY
, LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_5DAY
, blocks - LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_5DAY
, INSERT_DATE - 5 DATE_5DAY
, LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_6DAY
, blocks - LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_6DAY
, INSERT_DATE - 6 DATE_6DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_7DAY
, INSERT_DATE - 7 DATE_7DAY
, LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_8DAY
, blocks - LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_8DAY
, INSERT_DATE - 8 DATE_8DAY
, LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC )
BLOCKS_PREV_9DAY
, blocks - LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_9DAY
, INSERT_DATE - 9 DATE_9DAY
, LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_10DAY
, blocks - LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_10DAY
, INSERT_DATE - 10 DATE_10DAY
, LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_11DAY
, blocks - LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_11DAY
, INSERT_DATE - 11 DATE_11DAY
, LEAD(blocks,12,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_12DAY
, blocks - LEAD(blocks,12,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_12DAY
, INSERT_DATE - 12 DATE_12DAY
, LEAD(blocks,13,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_13DAY
, blocks - LEAD(blocks,13,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_13DAY
, INSERT_DATE - 13 DATE_13DAY
, LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_14DAY
, blocks - LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_14DAY
, INSERT_DATE - 14 DATE_14DAY
, LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_21DAY
, blocks - LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_21DAY
, INSERT_DATE - 21 DATE_21DAY
, LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_30DAY
, blocks - LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_30DAY
, INSERT_DATE - 30 DATE_30DAY
from monuser.v_gsi_dba_segments
-- WHERE SEGMENT_NAME = 'ABM_ACC_MAP_SUM_REP'
) T1
WHERE T1.rno = 1
-- AND blocks_diff_1day > 0
AND T1.OWNER != 'SYS' AND SEGMENT_NAME NOT LIKE 'BIN$%'
--AND T1.SEGMENT_NAME = 'GL_IMPORT_REFERENCES'
-- AND SEGMENT_NAME LIKE 'MTL%IDX%2%'
ORDER BY GROWTH_MB_1DAY DESC NULLS LAST
;

SELECT
NVL(GB_TODAY,-1) GB_TODAY, NVL(GB_1day,-1) GB_1DAY, NVL(gb_2day,-1) GB_2DAY, NVL(gb_3day,-1)
GB_3DAY, NVL(gb_4day,-1) GB_4DAY, NVL(gb_5day,-1) GB_5DAY
, NVL(GB_6day,-1) GB_6day, NVL(GB_7day,-1) GB_7day, NVL(gb_8day,-1) GB_8day, NVL(gb_9day,-1)
GB_9day, NVL(gb_10day,-1) GB_10day
, NVL(gb_11day,-1) GB_11day, NVL(GB_12day,-1) GB_12day, NVL(GB_13day,-1) gb_13day,
NVL(gb_14day,-1) gb_14day, NVL(gb_15day,-1) gb_15day
, NVL(gb_16day,-1) gb_16day, NVL(GB_17day,-1) gb_17day, NVL(GB_18day,-1) gb_18day,
NVL(gb_19day,-1) gb_19day, NVL(gb_20day,-1) gb_20day
, NVL(gb_21day,-1) gb_21day, NVL(GB_22day,-1) gb_22day, NVL(GB_23day,-1) gb_23day,
NVL(gb_24day,-1) gb_24day, NVL(gb_25day,-1) gb_25day
, NVL(gb_26day,-1) gb_26day , NVL(GB_27day,-1) gb_27day, NVL(GB_28day,-1) gb_28day ,
NVL(gb_29day,-1) gb_29day, NVL(gb_30day,-1) gb_30day
, NVL(gb_60day,-1) gb_60day, NVL(gb_90day,-1) gb_90day
FROM
(
SELECT
(select USED_SPACE_GB from v_db_size_info where dname = 'GSIPROD' and to_char(INSERT_DATE,
'YYYYMMDD') = to_char(SYSDATE, 'YYYYMMDD')) GB_TODAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-1, 'YYYYMMDD')) GB_1DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-2, 'YYYYMMDD')) GB_2DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-3, 'YYYYMMDD')) GB_3DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-4, 'YYYYMMDD')) GB_4DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-5, 'YYYYMMDD')) GB_5DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-6, 'YYYYMMDD')) GB_6DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-7, 'YYYYMMDD')) GB_7DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-8, 'YYYYMMDD')) GB_8DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-9, 'YYYYMMDD')) GB_9DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-10, 'YYYYMMDD')) GB_10DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-11, 'YYYYMMDD')) GB_11DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-12, 'YYYYMMDD')) GB_12DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-13, 'YYYYMMDD')) GB_13DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-14, 'YYYYMMDD')) GB_14DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-15, 'YYYYMMDD')) GB_15DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-16, 'YYYYMMDD')) GB_16DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-17, 'YYYYMMDD')) GB_17DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-18, 'YYYYMMDD')) GB_18DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-19, 'YYYYMMDD')) GB_19DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-20, 'YYYYMMDD')) GB_20DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-21, 'YYYYMMDD')) GB_21DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-22, 'YYYYMMDD')) GB_22DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-23, 'YYYYMMDD')) GB_23DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-24, 'YYYYMMDD')) GB_24DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-25, 'YYYYMMDD')) GB_25DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-26, 'YYYYMMDD')) GB_26DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-27, 'YYYYMMDD')) GB_27DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-28, 'YYYYMMDD')) GB_28DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-29, 'YYYYMMDD')) GB_29DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-30, 'YYYYMMDD')) GB_30DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-60, 'YYYYMMDD')) GB_60DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-90, 'YYYYMMDD')) GB_90DAY
from dual
);
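The per-day scalar subqueries above can also be written as a single pass over v_db_size_info with conditional aggregation; a sketch, assuming the same view and columns (only a few offsets shown):

```sql
SELECT NVL(MAX(CASE WHEN day_offset = 0  THEN used_space_gb END), -1) AS gb_today
     , NVL(MAX(CASE WHEN day_offset = 1  THEN used_space_gb END), -1) AS gb_1day
     , NVL(MAX(CASE WHEN day_offset = 7  THEN used_space_gb END), -1) AS gb_7day
     , NVL(MAX(CASE WHEN day_offset = 30 THEN used_space_gb END), -1) AS gb_30day
FROM (
    -- one row per snapshot, tagged with its age in whole days
    SELECT used_space_gb
         , TRUNC(SYSDATE) - TRUNC(insert_date) AS day_offset
    FROM   v_db_size_info
    WHERE  dname = 'GSIPROD'
      AND  insert_date >= TRUNC(SYSDATE) - 90
);
```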
DISCO Adhoc Reports

The query below lists the reports run between two dates, ordered by elapsed time:

SELECT fu.user_name
, fu.description
, q.qs_created_date run_date
, q.qs_doc_name report
, q.qs_doc_details sheet
, q.qs_act_elap_time elapsed_time_seconds
FROM disceul_us.eul5_qpp_stats q
JOIN applsys.fnd_user fu ON q.qs_created_by = '#' || fu.user_id
WHERE 1 = 1
AND q.qs_created_date > TO_DATE('16-AUG-2016','DD-MON-YYYY') AND q.qs_created_date < TO_DATE('18-AUG-2016','DD-MON-YYYY')
-- AND fu.user_name = 'SYSADMIN'
-- AND q.qs_doc_name IN ('My Test Report')
-- ORDER BY q.qs_created_date DESC;
ORDER BY q.qs_act_elap_time DESC;

Ref: https://oracle101.co.uk/discoverer/disco-reports-usage-submitted/

https://drdavetaylor.com/2015/06/16/discoverer-reporting-using-sql-to-generate-report-usage-statistics/
Read by other session

http://logicalread.solarwinds.com/oracle-read-by-other-session-wait-event/#.V7v9-Pl96M8
Column usage
begin
dbms_stats.seed_col_usage (null, null, 600);
end;
/

The third argument (600) is the monitoring window in seconds.

Column group usage information is recorded in the sys.col_group_usage$ dictionary table.

SET LONG 100000
SET LINES 300
SET PAGES 5000
select dbms_stats.report_col_usage('inv', 'MTL_MATERIAL_TRANSACTIONS') from dual;

select dbms_stats.create_extended_stats('inv', 'MTL_MATERIAL_TRANSACTIONS') from dual;

select dbms_stats.create_extended_stats(null, 'table name', '(c1, c2, c3)') from dual;

exec dbms_stats.gather_table_stats(user, 'table name')


Trace files

Ref: https://docs.oracle.com/cd/B28359_01/server.111/b28310/diag006.htm

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';

To find the trace file for your current session, submit the following query:

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';

The full path to the trace file is returned.

To find all trace files for the current instance, submit the following query:

SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Diag Trace';

The path to the ADR trace directory for the current instance is returned.

To determine the trace file for each Oracle Database process, submit the following query:

SELECT PID, PROGRAM, TRACEFILE FROM V$PROCESS;
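To generate a trace for the current session in the first place, one common sequence is the following (the tracefile_identifier value is arbitrary):

```sql
ALTER SESSION SET tracefile_identifier = 'MYTRACE';  -- tags the trace file name
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(binds => TRUE, waits => TRUE);

-- ... run the statements to be traced ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
SELECT VALUE FROM V$DIAG_INFO WHERE NAME = 'Default Trace File';
```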
Export an SQL Profile

EXEC DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF ( table_name => 'STG_8cvk4p3w12ad5', schema_name => 'APPS' );

EXEC DBMS_SQLTUNE.PACK_STGTAB_SQLPROF ( profile_name => 'coe_8cvk4p3w12ad5_2906237818', staging_table_name => 'STG_8cvk4p3w12ad5', staging_schema_owner => 'APPS' );

exp \'/ as sysdba\' tables=APPS.STG_8cvk4p3w12ad5 file=export_8cvk4p3w12ad5.dmp

scp export_8cvk4p3w12ad5.dmp nandigr@usnca009:/tmp

on usnca009:
cd /tmp

[val6]-->imp \'/ as sysdba\' file=/tmp/export_8cvk4p3w12ad5.dmp FROMUSER=APPS TOUSER=APPS

EXEC DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF ( profile_name => 'coe_8cvk4p3w12ad5_2906237818', replace => TRUE, staging_table_name => 'STG_8cvk4p3w12ad5', staging_schema_owner => 'APPS' );
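After the unpack, it is worth confirming that the profile arrived and is enabled on the target:

```sql
SELECT name, status, type, created
FROM   dba_sql_profiles
WHERE  name = 'coe_8cvk4p3w12ad5_2906237818';
```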
Space occupied by expired/errored out discoverer reports

select /*+ PARALLEL(8) */ sum(bytes)/1024/1024/1024 from dba_segments where segment_name in (
select DISTINCT BQT.BQT_TABLE_NAME
from
disceul_us.eul5_BQ_Tables BQT
JOIN DISCEUL_US.EUL5_BATCH_QUERIES BQ ON BQT.BQT_BQ_ID = BQ.BQ_ID
JOIN DISCEUl_US.EUL5_BATCH_SHEETS BS ON BQ.BQ_BS_ID = BS.BS_ID
JOIN DISCEUl_US.EUL5_BATCH_REPORTS BR ON BS.BS_BR_ID = BR.BR_ID
JOIN DISCEUl_US.EUL5_BR_RUNS BRR ON BRR.BRR_BR_ID = BR.BR_ID AND BRR.BRR_STATE IN (4, 7)
);
Opening a session in VNC

The objective is to run VNC on the clone. The steps below were performed on the CONV13 application node:

[nandigr@usncv508 ~]$ sudo su - conv13mgr
-bash: /mnt/oraconv13/conv13apps/apps/apps_st/appl/APPSconv13_usncv508.env: No such file or directory

[]-->vncserver :5

New 'usncv508:5 (conv13mgr)' desktop is usncv508:5

Starting applications specified in /home/conv13mgr/.vnc/xstartup


Log file is /home/conv13mgr/.vnc/usncv508:5.log

[]-->ps -ef |grep vnc


9455 32467 1 0 06:23 pts/1 00:00:00 Xvnc :5 -desktop usncv508:5 (conv13mgr) -httpd
/usr/share/vnc/classes -auth /home/conv13mgr/.Xauthority -geometry 1024x768 -depth 16 -rfbwait
30000 -rfbauth /home/conv13mgr/.vnc/passwd -rfbport 5905 -pn
9455 32474 1 0 06:23 pts/1 00:00:00 vncconfig -iconic
9455 32502 32439 0 06:23 pts/1 00:00:00 grep vnc

On Windows, launch VNC-Viewer-5.0.3-Windows-32bit (password: snow01).

After opening the VNC session, ssh to usnca021:

ssh usnca021
sudo su - conv13ora
RAC Technologies support matrix

Oracle RAC Technologies Certification Matrix:

http://www.oracle.com/technetwork/database/clustering/tech-generic-linux-new-086754.html

Core Oracle Database Certification Information (Doc ID 1306539.1)

http://www.oracle.com/technology/products/database/clustering/index.html
Public (vs) private (vs) virtual

In Oracle RAC clusters, we see three types of IP addresses:

- Public IP: the server's public address, the same as any server IP address; a unique address that exists in /etc/hosts.
- Private IP: Oracle RAC requires private IP addresses for CRS, the Clusterware heartbeat, and the cache fusion (interconnect) layer.
- Virtual IP: Oracle uses a Virtual IP (VIP) for database access. The VIP must be on the same subnet as the public IP address and is used for RAC failover (TAF).
Using coe_xfr_sql_profile.sql to force a good plan

Ref1: http://www.bobbydurrettdba.com/2013/07/10/using-coe_xfr_sql_profile-sql-to-force-a-good-plan/

Ref2: [prodora@usnca003a:/home/prodora/users/ravi/EDC]
sql_id hash_value of a query

SQL> select * from dual;

SQL> select sql_id, hash_value from v$sql where sql_text = 'select * from dual';

SQL_ID        HASH_VALUE
------------- ----------
a5ks9fhw2v9s1  942515969
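With the sql_id in hand, the cached execution plan can be displayed directly:

```sql
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('a5ks9fhw2v9s1', NULL, 'TYPICAL'));
```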

You might also like