Oracle Runbook
PID to SID
SID to PID
Remove Discoverer reports from database
truss output
LOBSEGMENT to TABLE mapping
Purging programs
GL_JE_LINES
OE_ORDER_LINES_HISTORY
CSI_PERIOD_CLOSE_SUMMARY
Finding trace files
Setting up trace
Automated Segment Adviser advice
Recommendations (Sam Alapati)
Generate AWR baselines (static)
ORA-4020 deadlock while compiling PL/SQL package
Number of migrated/chained rows
APPEND (vs) APPEND_VALUES (vs) SYS_DL_CURSOR
Conditional index
Exadata compression
Compare SQL hash values for the same SQL_ID
EUL5_QPP_STATS purge
Result set to workbook/sheet mapping
Workbook/sheet to result set mapping
Drop orphaned scheduled reports
How to recover a deleted Discoverer workbook
Last accessed date of an Oracle workbook
Query
Workbook (vs) sheet (vs) result table name
EUL5_BATCH_REPORTS
Final query
Query 1
Query 2
Query 3
Display date with hours/minutes information
Select from a partition
Oracle table growth prediction
Table row count
Find expired reports
Find AUTO-refresh reports
BRR_STATE
Disco cleanup
1. Drop tables first
2. Delete from EUL5_BQ_TABLES
3. Delete from EUL5_BQ_DEPS
4. Delete from eul5_segments
5. Delete from eul5_batch_queries
6. Delete from eul5_batch_params
7. Delete from eul5_batch_sheets
8. Delete from eul5_br_runs
9. Run all
Find Disco user sessions
Job ID to Disco user mapping
Disco sessions running over 3 hours
LEAD function
Disco ad hoc reports
Read by other session
Column usage
Trace files
Export an SQL profile
Space occupied by expired/errored-out Discoverer reports
Opening a session in VNC
RAC Technologies support matrix
Public (vs) private (vs) virtual
Using coe_xfr_sql_profile.sql to force a good plan
Small Table Threshold
In 11g, if a table's size exceeds _small_table_threshold and the optimizer decides to go for a full table
scan, the table is read into the PGA via serial direct path reads rather than into the SGA buffer cache.
BLOCKS
----------
197669
Conclusion: here 197,669 > 101,320, so XXDBD_BR_CALL_ENTRY qualifies for direct path reads if the
optimizer initiates a full table scan on it.
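The comparison above can be sketched as follows. The 2%-of-buffer-cache default for _small_table_threshold is a commonly cited rule of thumb, not a value verified against this instance; the 197,669 and 101,320 figures come from the conclusion above.

```python
# Sketch of the serial direct path decision described above.
def small_table_threshold(buffer_cache_blocks, pct=0.02):
    # Commonly cited default: roughly 2% of the buffer cache, in blocks.
    return round(buffer_cache_blocks * pct)

def qualifies_for_direct_path(table_blocks, threshold):
    # Tables larger than the threshold are candidates for serial
    # direct path reads (into the PGA) on a full table scan.
    return table_blocks > threshold

# Figures from the section above: 197,669 blocks vs. a threshold of 101,320.
print(qualifies_for_direct_path(197669, 101320))  # True
```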
SKIP INDEX
Skip scans apply to composite indexes only (a composite index is simply an index on multiple columns).
Index skip scans improve index scans by nonprefix columns. Skip scanning lets a composite index be split
logically into smaller subindexes. In skip scanning, the initial column of the composite index is not
specified in the query; in other words, it is skipped.
The number of logical subindexes is determined by the number of distinct values in the initial column. Skip
scanning is advantageous if there are few distinct values in the leading column of the composite index and
many distinct values in the nonleading key of the index.
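A toy model of the mechanism: treating a composite index as sorted (leading, nonleading) tuples, a skip scan performs one probe per distinct leading-column value, i.e., one per logical subindex. The index data below is made up purely for illustration.

```python
from bisect import bisect_left

# Hypothetical composite index on (gender, salary), stored as sorted tuples.
index = sorted([('F', 100), ('F', 300), ('M', 100), ('M', 250)])

def skip_scan(index, salary):
    # One probe per distinct leading-column value ("logical subindex"),
    # even though the query predicate only constrains the second column.
    leading = sorted({lead for lead, _ in index})
    hits = []
    for lead in leading:
        pos = bisect_left(index, (lead, salary))   # binary search within subindex
        while pos < len(index) and index[pos] == (lead, salary):
            hits.append(index[pos])
            pos += 1
    return hits

print(skip_scan(index, 100))  # [('F', 100), ('M', 100)]
```

With two distinct leading values, the scan does two cheap probes instead of reading the whole index.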
Row Movement
http://www.databasejournal.com/features/oracle/article.php/3676401/Row-Movement-in-Oracle.htm
Table Fragmentation
http://select-star-from.blogspot.com/2013/09/how-to-check-table-fragmentation-in.html
ORA-01654: unable to extend
Resolution: identify the tablespace's datafiles, then add a datafile or resize/autoextend an existing one:
select TABLESPACE_NAME, FILE_NAME from dba_data_files
where tablespace_name = 'APPS_TS_TX_DATA';
Received the below error while doing an online table reorg using ALTER TABLE MOVE.
Solution:
SQL> commit force '45.7.1598909';
Commit complete.
Table altered.
Query: table fragmentation check
select table_name,round((blocks*8),2)/1024/1024 "size (GB)" ,
round((num_rows*avg_row_len/1024),2)/1024/1024 "actual_data (GB)",
(round((blocks*8),2) - round((num_rows*avg_row_len/1024),2))/1024/1024
"wasted_space (GB)",
(((round((blocks*8),2) -
round((num_rows*avg_row_len/1024),2))/1024/1024)/(round((blocks*8),2)/1024/1024))*100
"Wasted_Per"
from dba_tables
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
-- AND round((blocks*8),2)/1024/1024 > 1
and table_name = 'JTF_TASKS_B'
order by 2 asc;
Query: Order by wasted space percentage descending (Table size > 2GB)
select table_name,round((blocks*8),2)/1024/1024 "size (GB)" ,
round((num_rows*avg_row_len/1024),2)/1024/1024 "actual_data (GB)",
(round((blocks*8),2) - round((num_rows*avg_row_len/1024),2))/1024/1024
"wasted_space (GB)",
(((round((blocks*8),2) -
round((num_rows*avg_row_len/1024),2))/1024/1024)/(round((blocks*8),2)/1024/1024))*100
"Wasted_Per"
from dba_tables
where (round((blocks*8),2) > round((num_rows*avg_row_len/1024),2))
AND round((blocks*8),2)/1024/1024 >= 2
order by 5 desc;
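The arithmetic behind the two queries above (blocks × 8 KB allocated vs. num_rows × avg_row_len actually used) can be sketched in isolation. Like the queries, this assumes an 8 KB block size; the sample figures are illustrative only.

```python
# Sketch of the wasted-space arithmetic used by the fragmentation queries.
def fragmentation(blocks, num_rows, avg_row_len, block_kb=8):
    size_gb = blocks * block_kb / 1024 / 1024            # allocated space
    actual_gb = num_rows * avg_row_len / 1024 / 1024 / 1024  # data space
    wasted_gb = size_gb - actual_gb
    wasted_pct = wasted_gb / size_gb * 100
    return size_gb, actual_gb, wasted_pct

# Illustrative table: 131,072 blocks (1 GB allocated), 1M rows of 100 bytes.
size_gb, actual_gb, wasted_pct = fragmentation(131072, 1_000_000, 100)
print(round(size_gb, 2), round(actual_gb, 2), round(wasted_pct, 1))
```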
Tables based upon usage (hot tables)
-- temp1 holds one row per table with its access count from V$SQL_PLAN
drop table temp1;
create table temp1 as
select object_name, count(*) access_count
from v$sql_plan
where operation = 'TABLE ACCESS'
group by object_name
order by count(*) desc;
table fetch continued row is a row that was fetched (be it by rowid or scanning) that either
didn't fit in a block or, in the case of a rowid fetch, has migrated. It does not MEAN the row
was migrated; it could be that the row is just too large to fit in a single block.
table scan blocks/rows gotten - self defining, just counters of rows/blocks retrieved during
full scans
direct read scans are full scans that bypass the buffer cache (much like parallel query has
always done - though in 11g parallel query might not anymore; it depends). We read
directly from disk into your PGA to process the data; the buffer cache is deemed far too
inefficient.
table scans long tables - big tables that were full scanned and placed into the buffer cache
in a manner so as to not fill up the buffer cache. Typically as you are scanning the table -
you are using a very very small set of blocks in the buffer cache over and over - so the
table basically overwrites itself in the cache. This prevents a 5gb full table scan from
obliterating a 4gb db cache size. That 5gb scan would only use a very small section of the
buffer cache.
table scans rowid ranges - self describing, scanning a portion of a table based on rowid
ranges.
table scans short tables - the table is deemed small enough to fit comfortably in the buffer
cache, so unlike a long-table full scan it does not overwrite itself in the cache.
The method for determining a small table is constantly being revised - the process is a
little different from the 7.x days, but the net result is the same. Consider any table greater
than 5% of the size of the cache to be a large table, and anything less is probably small.
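The short-vs-long heuristic quoted above can be sketched as follows. The 5% figure is the rule of thumb stated in the text, not Oracle's exact internal test, which is revised between releases.

```python
# Sketch of the short-vs-long table classification for full scans.
def table_scan_class(table_blocks, buffer_cache_blocks):
    if table_blocks > 0.05 * buffer_cache_blocks:
        return "long"   # scanned via a small, self-overwriting window of buffers
    return "short"      # cached normally; may stay resident

# A 5 GB table scanned against a 4 GB cache (8 KB blocks) is "long",
# so it will not obliterate the cache.
print(table_scan_class(655360, 524288))  # long
```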
Top 10 tables by physical/logical IO
By Physical IO:
select owner,segment_name,object_type,total_physical_reads
from ( select owner,object_name as segment_name,object_type,
value as total_physical_reads
from v$segment_statistics
where statistic_name in ('physical reads')
order by total_physical_reads desc)
where rownum <11;
By Logical IO:
select owner,segment_name,object_type,total_logical_reads
from ( select owner,object_name as segment_name,object_type,
value as total_logical_reads
from v$segment_statistics
where statistic_name in ('logical reads')
order by total_logical_reads desc)
where rownum <11;
Logical and physical reads on an individual table:
select
to_char(begin_interval_time,'yy/mm/dd/hh24') c1,
logical_reads_total c2,
physical_reads_total c3, table_scans_total, table_scans_delta
from
dba_hist_seg_stat s,
dba_hist_seg_stat_obj o,
dba_hist_snapshot sn
where
o.owner = 'AR'
and
s.obj# = o.obj#
and
sn.snap_id = s.snap_id
and
object_name = 'RA_INTERFACE_LINES_ALL'
order by
begin_interval_time desc;
AWR Snapshot
Generating a manual (ad hoc) AWR snapshot:
EXEC dbms_workload_repository.create_snapshot;
Cursor Leaks
select oc.sid,oc.hash_value,oc.sql_text,count(*)
from v$open_cursor oc,
v$session ss
where
oc.sid = ss.sid and
oc.sql_text not like '%COMMIT%'
group by oc.sid,oc.hash_value,oc.sql_text
having count(*) > 5
order by 4;
Metalink: 248857.1
Ref: http://oracle-base.com/articles/10g/active-session-history.php
Exadata best practices (x2-2/x2-8)
Doc ID 757552.1
Reading AWR report
Oracle's time-series reports, STATSPACK and AWR, were originally created exclusively for use by Oracle
Technical Support Services.
By enabling AMM, you allow Oracle to dynamically change the sizes of the major SGA pools such as the
shared_pool_size, db_32k_cache_size, db_16k_cache_size, large_pool_size and so on.
Please note AMM is a reactive tool that changes the SGA after detecting that the database has experienced
degradation. An experienced DBA can do a much better job than AMM!
AMM has been enhanced in 11g, but it is still primarily designed for small databases where it is not
feasible to proactively optimize the major RAM pools.
OLTP Workload:
Transactions/second > 1
"db file sequential read" appears among the top-5 timed events at greater than 10 reads/second
Ref: http://troype.com/?blog5
http://www.slideshare.net/karlarao/mining-the-awr-repository-for-capacity-planning-and-visualization-white-p
Physical reads (vs) Direct path reads
Note: R_PCT is the percentage of total physical reads that are direct path reads.
Queries
exec dbms_sqltune.drop_sql_profile('SYS_SQLPROF_014d5317f7dd0002');
exec dbms_sqltune.alter_sql_profile('SYS_SQLPROF_014fb30a089a0009','STATUS','DISABLED');
exec dbms_sqltune.alter_sql_profile('SYS_SQLPROF_014fb30a089a0009','STATUS','ENABLED');
SQL Plan baselines
https://docs.oracle.com/database/121/TGSQL/tgsql_spm.htm#TGSQL658
BI Apps:
alter table xla.xla_distribution_links modify partition AR storage (buffer_pool keep);
alter table xla.xla_distribution_links modify partition AP storage (buffer_pool keep);
(6) Please note that it’s advisable to rebuild indexes after compression.
alter index "JTF"."JTF_TASKS_B_N7" rebuild online compress parallel 20;
(7) Update statistics once again & compare the table size before and after
Note: http://docs.oracle.com/cd/B28359_01/server.111/b28318/schema.htm#CNCPT1132
INITRANS
Each block has a block header.
Whenever a transaction locks rows in a block, the transaction's details are recorded in the block's
interested transaction list (ITL), a transaction table in the block header.
The initial size of this transaction table is specified by INITRANS at the object level. For tables, the value
defaults to 1 (for indexes, 2).
If a row is too large to fit in a single data block, Oracle stores the data for the row in a chain of two or
more data blocks reserved for that segment (row chaining).
If an update increases a row's size and there is no free space in the block to hold the updated row,
Oracle migrates the data for the entire row to a new data block, assuming the entire row can fit in a new
block (row migration).
Oracle leaves the original row piece of a migrated row in place, pointing to the new block containing the
migrated row. The rowid of a migrated row does not change.
When a row is chained or migrated, I/O performance associated with that row decreases because Oracle
must read more than one data block to retrieve it.
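The three outcomes described above (row fits in place, row migrates, row chains) can be sketched as a simple decision. The sizes are illustrative; real blocks also hold headers and ITL entries.

```python
# Sketch of the stay/migrate/chain decision for an updated row.
def classify_row(new_row_len, free_in_block, usable_block_size):
    if new_row_len <= free_in_block:
        return "stays in place"
    if new_row_len <= usable_block_size:
        return "migrated"   # whole row moves; old piece keeps a pointer to it
    return "chained"        # row cannot fit in any single block

print(classify_row(900, 100, 8000))  # migrated
```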
ROWID Pseudocolumn
The ROWID pseudocolumn returns the address of the row. This address is a combination of:
the object number (each object has its own unique object number in the database)
the tablespace/datafile reference (the relative file number)
the data block reference within the datafile
the position of the row within the data block
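As an illustration of how those four components pack into the 18-character extended ROWID, here is a hedged sketch. The 6/3/6/3-character layout and the A-Za-z0-9+/ alphabet follow the commonly documented extended rowid format; the sample rowid is made up for the example.

```python
# Sketch of decoding an extended ROWID: OOOOOO FFF BBBBBB RRR
# (object#, relative file#, block#, row#) in Oracle's base-64 alphabet.
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def _decode(chunk):
    n = 0
    for ch in chunk:
        n = n * 64 + ALPHABET.index(ch)
    return n

def decode_rowid(rowid):
    assert len(rowid) == 18, "extended rowids are 18 characters"
    return {
        "object#": _decode(rowid[0:6]),
        "file#":   _decode(rowid[6:9]),
        "block#":  _decode(rowid[9:15]),
        "row#":    _decode(rowid[15:18]),
    }

print(decode_rowid("AAAR3sAAEAAAACnAAA"))
```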
prompt -- Save script which we will later use to populate the above table...
insert into my_index_stats
select NAME, HEIGHT, DEL_LF_ROWS, DISTINCT_KEYS, ROWS_PER_KEY,
BLKS_GETS_PER_ACCESS
from INDEX_STATS
-- Note this open line...
prompt
prompt -- Spool listing with validate commands...
col line1 newline
col line2 newline
col line3 newline
set pagesize 0
set echo off
set termout off
set trimspool on
set feed off
set linesize 200
spool /tmp/validate_indexes.sql
select 'prompt Process table '||owner||'.'||table_name||
', index '||index_name||'...' line1,
'validate index '||owner||'.'||index_name||';' line2,
'@/tmp/save_index_stats.sql' line3
from sys.dba_indexes where owner = 'SCOTT'
order by table_name, index_name
/
spool off
set termout on
set feed on
prompt
prompt -- Run script to validate indexes...
@/tmp/validate_indexes.sql
-- Cleanup
drop table my_index_stats
/
! rm /tmp/validate_indexes.sql
! rm /tmp/save_index_stats.sql
prompt
prompt Report is in idxfrag.lst
prompt Done!!!
Index Fragmentation
If deleted entries exceed roughly 15% of current entries, a rebuild is recommended.
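The 15% rule can be expressed against the DEL_LF_ROWS and LF_ROWS columns that INDEX_STATS exposes after a VALIDATE INDEX; the threshold is the heuristic stated above, not a hard Oracle rule.

```python
# Sketch of the index-rebuild heuristic using INDEX_STATS figures.
def rebuild_recommended(del_lf_rows, lf_rows, threshold=0.15):
    if lf_rows == 0:
        return False          # empty index: nothing to rebuild
    return del_lf_rows / lf_rows > threshold

print(rebuild_recommended(2000, 10000))  # 20% deleted -> True
```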
Statspack creation
sqlplus / as sysdba
@spdrop.sql
@spcreate.sql
[gato]-->sqlplus perfstat/password
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> exit
Note:
The best way to check the benefit of setting up a keep pool (for your version of Oracle) is to run
Statspack at level 7 (the default is level 5) and check the segment statistics section of the report to see
whether there are any objects responsible for a large amount of physical I/O that would give you a lot of
benefit if you could "KEEP" them, without sacrificing so much memory that other objects start doing more
physical I/O.
This level includes all statistics gathered in the lower levels, and additionally gathers
the performance data on highly used segments. RAC specific segment level statistics
are also captured with level 7.
A level 7 snapshot gathers information which determines what segments are more
heavily accessed and contended. With this information, you can decide to modify the
physical layout of some segments or of the tablespaces they reside in. For example, to
better spread the segment IO load, you can add files residing on different disks to a
tablespace storing a heavily accessed segment or you can partition a segment. This
information can also help decide on changing segment attributes values such
as PCTFREE or INITRANS. On a RAC environment, this information allows us to easily
spot the segments responsible for much of the cross-instance traffic.
Logical reads
Db block changes
Physical reads
Physical writes
Physical reads direct
Physical writes direct
Global cache consistent read blocks served (RAC specific)
Global cache current blocks served (RAC specific)
Buffer busy waits
ITL waits
Row lock waits
Although Statspack captures all segment statistics, it reports only the statistics listed above when they
exceed one of the predefined threshold parameters.
The values of the threshold parameters are used when deciding which segment statistics to collect. If any
of a segment's statistics exceeds a threshold value, all statistics for that segment are captured during the
snapshot. The thresholds used are either those stored in the table stats$statspack_parameter or those
specified when the snapshot is taken.
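The capture rule just described (if any statistic for a segment exceeds its threshold, all of that segment's statistics are captured) can be sketched as follows; the threshold values are illustrative, not Statspack defaults.

```python
# Sketch of Statspack's segment-capture rule at snapshot level 7.
def capture_segment(stats, thresholds):
    # Capture the whole segment if ANY tracked statistic is over its limit.
    return any(stats.get(name, 0) > limit for name, limit in thresholds.items())

thresholds = {"logical reads": 10000, "physical reads": 1000, "buffer busy waits": 100}
seg = {"logical reads": 500, "physical reads": 2500, "buffer busy waits": 0}
print(capture_segment(seg, thresholds))  # physical reads over limit -> True
```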
v$active_session_history
To find out users simultaneously submitting something: For example, timesheets submission
Ref: http://www.oracle.com/technetwork/issue-archive/2012/12-jul/o42dba-1566567.html
Step 1: Find the session ID. You can get the session ID from the user ID.
Step 2: If you don't have the session ID, you can list the sessions that are waiting using the query below:
***
select sid,username,status,
decode(state, 'WAITING','Waiting',
'Working') state,
decode(state,
'WAITING',
'So far '||seconds_in_wait,
'Last waited '||
wait_time/100)||
' secs for '||event
"Description",
blocking_session, blocking_instance, row_wait_obj#, row_wait_file#, row_wait_block#,
row_wait_row#,
program, osuser, machine, terminal, service_name, logon_time, last_call_et
from v$session
where
--username = 'APPS' AND
EVENT like '%lock%'
order by seconds_in_wait desc;
***
Note: the wait event "SQL*Net message from client" means the session is idle; there is no specific state
that flags a session as idle.
Step 3:
Find the SQL query the session is waiting on:
set long 99999
select sql_fulltext
from v$sql
where sql_id = (select sql_id from v$session where sid = 6902);
Step 4:
Take BLOCKING_SESSION and get the query on which it is blocked:
set long 99999
select sql_fulltext
from v$sql
where sql_id = (select sql_id from v$session where sid = 6872);
Step 5:
You can get the row a specific session is waiting on using the two queries below:
select row_wait_obj#,
row_wait_file#,
row_wait_block#,
row_wait_row#
from v$session
where sid = 6902;
select *
from &1..&2
where rowid =
dbms_rowid.rowid_create (
rowid_type => 1,
object_number => &3,
relative_fno => &4,
block_number => &5,
row_number => &6
);
Note: LAST_CALL_ET in V$SESSION shows how long, in seconds, it has been since the session last
submitted a call, i.e., how long the session has been waiting since it submitted the query.
select SID, state, event, p1, p2
from v$session
where username = 'ARUP';
The P1 column shows the file ID, and the P2 column shows the block ID. From that information in the result in Listing 7,
you can get the segment name from the extent information in DBA_EXTENTS, as shown below:
select owner, segment_name
from dba_extents
where file_id = 5
and 3011 between block_id
and block_id + blocks;
OWNER    SEGMENT_NAME
-----    ------------
ARUP     T1
This shows that the session's disk reads are against the T1 table, owned by ARUP. You should direct your
attention to this table for tuning. You can move the table to a high-speed disk for faster I/O, or, alternatively,
you can focus on making I/O against this table faster by making changes that affect it, such as creating new
indexes, creating materialized views, or building a result cache.
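The DBA_EXTENTS lookup above amounts to a containment test: an extent owns a block when the file IDs match and the block number falls between BLOCK_ID and BLOCK_ID + BLOCKS (inclusive, mirroring the query's BETWEEN). The extent data below is made up for illustration.

```python
# Sketch of mapping a (file#, block#) wait to its owning segment.
def find_segment(extents, file_id, block):
    for e in extents:
        if e["file_id"] == file_id and e["block_id"] <= block <= e["block_id"] + e["blocks"]:
            return e["owner"], e["segment_name"]
    return None

extents = [{"owner": "ARUP", "segment_name": "T1",
            "file_id": 5, "block_id": 3000, "blocks": 128}]
print(find_segment(extents, 5, 3011))  # ('ARUP', 'T1')
```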
Ref: http://www.oracle.com/technetwork/issue-archive/2012/12-nov/o62dba-1741123.html
Ref: http://www.oracle.com/technetwork/issue-archive/2013/13-jan/o13dba-1871177.html
Forcing reads through the SGA (disabling serial direct path reads)
XXDBD_BR_CS_GEO_EVENTS:
alter session set "_serial_direct_read"=never;
select 'REGISTRO MAIS ANTIGO: ' || 'Task_Number: ' || Task_Number || ' - ' ||
       'Geo_Evento_Id: ' || Geo_Event_Id || ' - pendente de validação desde: ' ||
       to_char(CREATED_DATE, 'DD/MM/RRRR HH24:MI:SS')
from xxdbd.xxdbd_br_cs_geo_events
where geo_event_id = (select MIN(geo_event_id) from xxdbd.xxdbd_br_cs_geo_events
                      where STATUS = 'NEW' AND CREATED_DATE < SYSDATE - 0.25/24);
XXDBD_BR_CALL_ENTRY:
alter session set "_serial_direct_read"=never;
SELECT t1.CALL_ENTRY_ID, t1.SOURCE_ID, t1.STATUS, t1.SR_NUMBER, t1.TASK_NUMBER, t1.LAST_VALIDATION_RESULT,
t1.CALL_ENTRY_TYPE, t1.CUSTOMER_CALL_NUMBER, t1.CUSTOMER_CODE FROM XXDBD.XXDBD_BR_CALL_ENTRY t1
WHERE ( t1.WM_STATUS is null or t1.WM_STATUS <> 'y' ) AND (t1.STATUS <> 'NOVO' AND t1.SOURCE_TYPE =
'GEN.DBD_CALL_ENTRY');
JTF_TASKS_B:
alter session set "_serial_direct_read"=never;
select count(*) FROM(
SELECT /*+ ALL_ROWS */ MAX(TASK_ID), SOURCE_OBJECT_ID
FROM JTF.JTF_TASKS_B WHERE TASK_ID IS NOT NULL GROUP BY SOURCE_OBJECT_ID
);
Check what objects to CACHE in memory
Note: ordered by 'TABLE ACCESS' count descending. A size restriction can be added, something like <= 100.
Latest (use this): instead of only TABLE SCAN operations, this considers all references to an object in V$SQL_PLAN:
SELECT T1.SNO, T2.OWNER, T2.TABLE_NAME, T2.NUM_ROWS, T2.BLOCKS*8/1024 TABLE_SIZE_MB,
T1.count_object SQL_PLAN_COUNT,
T3.value total_physical_reads
FROM
(select rownum SNO, object_name, count_object
FROM
(
select object_name, count(object_name) count_object from v$sql_plan
where object_name IS NOT NULL AND object_type = 'TABLE'
-- operation = 'TABLE ACCESS'
and object_owner not in ('SYS', 'SYSTEM')
group by object_name
order by 2 desc
) )T1, dba_tables T2 , v$segment_statistics T3
WHERE
T1.object_name = T2.table_name AND
T1.object_name = T3.object_name AND
T3.statistic_name in ('physical reads')
-- AND T2.OWNER = 'APPLSYS'
-- AND T2.BLOCKS*8/1024 <= 200
-- ORDER BY 7 desc
-- ORDER BY T1.SNO
ORDER BY 6 desc
;
http://blog.tanelpoder.com/2011/08/23/v8-bundled-exec-call-and-oracle-program-interface-opi-calls/
Index usage
Ref: http://docs.oracle.com/cd/E24628_01/server.121/e55047/tdppt_awr_warehouse.htm#TDPPT145
Table stats
***Works nice
BEGIN
FOR x IN (SELECT *
FROM dba_tables
WHERE owner = 'SOME_SCHEMA'
AND table_name NOT IN (<<list of MVs>>))
LOOP
dbms_stats.gather_table_stats( x.owner, x.table_name, ... );
END LOOP;
END;
Notes: From Oracle 11g onwards, it is better to leave ESTIMATE_PERCENT at its default value.
http://logicalread.solarwinds.com/solving-oracle-enq-tm-contention-waits-dr01/#.VZDgo_mqqko
Export and Import
cd /mnt/oracle_stage_NAS/tuning/dumps/gbip
parallel=4
directory=DATA_PUMP_DIR
dumpfile=exp_prod.dmp
logfile=exp_prod.log
Tablespace Free check
After upgrading to 11.2.0.4, immediately do the following, in this exact sequence. Run in VNC as it may
take a few hours:
exec dbms_stats.gather_schema_stats('SYS');
exec dbms_stats.gather_DATAbase_stats( gather_sys=>true);
exec dbms_stats.gather_dictionary_stats;
While the database is live and under regular load, do the following:
exec DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
Reference: How to Gather Statistics on Objects Owned by the 'SYS' User and 'Fixed' Objects (Doc ID
457926.1)
STATISTICS
Checking auto optimizer statistics collection is running
select client_name, status from dba_autotask_client where client_name like '%stats%';
EXEC DBMS_AUTO_TASK_ADMIN.disable
EXEC DBMS_AUTO_TASK_ADMIN.enable
Doc ID 1968577.1
Tables
DBA_AUTOTASK_CLIENT
DBA_AUTOTASK_JOB_HISTORY
DBA_AUTOTASK_WINDOW_HISTORY
DBA_SCHEDULER_JOB_RUN_DETAILS
DBA_AUTOTASK_TASK
Parameters
_optimizer_autostats_job
Queries
select client_name,status from dba_autotask_client where client_name like
'%stats%';
select session_id, LOG_DATE, REQ_START_DATE, ACTUAL_START_DATE, STATUS, RUN_DURATION, ADDITIONAL_INFO
from DBA_SCHEDULER_JOB_RUN_DETAILS
where JOB_NAME like 'ORA$AT_OS_OPT_SY_%'
order by ACTUAL_START_DATE desc;
SELECT a.ksppinm as "Parameter", a.ksppdesc as "Description", b.ksppstvl as
"Session Value", c.ksppstvl FROM x$ksppi a, x$ksppcv b, x$ksppsv c WHERE
a.indx = b.indx AND a.indx = c.indx
AND a.ksppinm LIKE '%optimizer_autostats_job%' ;
DBMS_STATS.GATHER_SCHEMA_STATS
It is important to note that using DBMS_STATS.GATHER_SCHEMA_STATS on an E-Business suite database
is not supported.
To unlock: dbms_stats.unlock_schema_stats
Gather Table Statistics: this concurrent request is for standard tables; it uses the FND_STATS package to
collect the statistics.
select * from dba_tab_histograms where table_name = 'CUSTOMER_RK'; ==> you will see the same
histogram data as before
EXEC dbms_stats.gather_table_stats(user, 'CUSTOMER_RK'); ==> this will change the histograms, based on
the queries sampled against the CUSTOMER_RK table
If you don't want that, you can just get the endpoints (one bucket: the high and low values and the number
of values - generally useful information) by doing this:
Now, if you really don't want column information, you can do the below:
https://blogs.oracle.com/optimizer/entry/how_does_the_method_opt
Verifying CBO statistics
Doc ID 163208.1
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=360124379002405&parent=DOCUMENT&sourceId=744143.1&id=163208.1&_afrWindowMode=0&_adf.ctrl-state=pecfzhfi9_145
Oracle Applications 11i initialization parameters
Doc ID 216205.1
Database Size
select * from dba_objects where OWNER NOT IN ('SYS', 'SYSTEM') and owner = 'XXDBD_DW'
order by last_ddl_time desc;
Table and Index statistics history
SELECT ob.owner, ob.object_name, ob.object_type, rowcnt, avgrln ,samplesize, analyzetime
FROM sys.WRI$_OPTSTAT_TAB_HISTORY, dba_objects ob
WHERE owner=upper('&OWNER')
and object_name=upper('&TABLE')
and object_type in ('TABLE')
and object_id=obj#
order by analyzetime desc;
works
OEL/Virtual Box setup
https://technology.amis.nl/2013/06/29/how-to-create-a-virtual-box-vm-with-oracle-enterprise-linux-inside-a-dummy-guide/
Alter Index Unusable script
select 'ALTER INDEX ' || owner || '.' || index_name || ' unusable;' from dba_indexes
where owner ='XXDBD_OBAW'
and TABLE_NAME = 'W_PURCH_ORDER_F'
and index_type = 'BITMAP'
;
Grant EXPLAIN plan privilege
grant insert on plan_table to appsro;
grant ADMINISTER SQL MANAGEMENT OBJECT to dbsnmp; -- to allow implementing SQL plan baselines
SQL> select log_mode from v$database;

LOG_MODE
------------
NOARCHIVELOG
MV Refresh using DELETE (vs) TRUNCATE
Ref: http://hrivera99.blogspot.in/2011/04/vobjectusage-view-empty.html
Guaranteed undo retention, meaning that undo data will never be overwritten until the time specified by
the undo retention has passed, is enabled at the tablespace level.
Flashback query
select * from
(
select T1.owner, T1.table_name, T1.NUM_ROWS, T1.BLOCKS, T1.ACTUAL_BLOCKS, T1.RATIO,
Count(T2.table_name) number_of_columns
FROM (
select owner, table_name, NUM_ROWS, blocks, avg_row_len,
round((NUM_ROWS*AVG_ROW_LEN*1.16)/8192,2) actual_blocks, -- assumes an 8 KB block size
round(blocks/(NUM_ROWS*AVG_ROW_LEN*1.16/8192),2) ratio
from dba_tables
where owner NOT IN ( 'SYS', 'SYSMAN')
-- and table_name = 'STG_TISFC001133_RAVI'
and blocks > 1000 AND NUM_ROWS > 0 AND AVG_ROW_LEN > 0
--order by ((NUM_ROWS*AVG_ROW_LEN*1.16)/8/1024 - blocks) desc
order by 7 desc
) T1, dba_tab_columns T2
where T1.table_name = T2.table_name and T1.owner = T2.owner
--and T1.table_name = 'XXDBD_DW_DIM_SERVICE_CONTRACT'
-- ORDER BY 6 desc
group by T1.owner, T1.table_name, T1.NUM_ROWS, T1.BLOCKS, T1.ACTUAL_BLOCKS, T1.RATIO
)
order by ratio desc;
select 'analyze table ' || OWNER || '.' || TABLE_NAME || ' list chained rows into xxdbd_dw_stg.chained_rows;'
from dba_tables where owner = 'XXDBD_OBAW';
Method:2
analyze table inv.mtl_txn_request_headers compute statistics;
Query:
select pid, spid
from v$process
where addr in (select paddr from v$session where status = 'KILLED');
Recovery after killing a large transaction
http://www.runningoracle.com/product_info.php?products_id=330
https://community.oracle.com/thread/469479?start=0&tstart=0
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'REPAIR_TABLE',
TABLE_TYPE => dbms_repair.repair_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'OBAW_DIM');
END;
/
BEGIN
DBMS_REPAIR.ADMIN_TABLES (
TABLE_NAME => 'ORPHAN_KEY_TABLE',
TABLE_TYPE => dbms_repair.orphan_table,
ACTION => dbms_repair.create_action,
TABLESPACE => 'OBAW_DIM');
END;
/
SET SERVEROUTPUT ON
DECLARE num_corrupt INT;
BEGIN
num_corrupt := 0;
DBMS_REPAIR.CHECK_OBJECT (
SCHEMA_NAME => 'XXDBD_OBAW',
OBJECT_NAME => 'W_PRODUCT_XACT_F',
REPAIR_TABLE_NAME => 'REPAIR_TABLE',
CORRUPT_COUNT => num_corrupt);
DBMS_OUTPUT.PUT_LINE('number corrupt: ' || TO_CHAR (num_corrupt));
END;
/
Ref: https://docs.oracle.com/cd/B10501_01/server.920/a96521/repair.htm
http://www.oracle.com/technetwork/database/availability/maa-datacorruption-bestpractices-396464.pdf
CREATE USER
User created.
IMP status
select c.inst_id,c.sid,c.serial#
from
gv$locked_object a,
dba_objects b,
gv$session c,
gv$lock d
where a.object_id=b.object_id and a.session_id=c.sid and c.sid=d.sid
and b.object_name = '&TABLE_NAME';
http://www.dba-oracle.com/t_goldengate_data_selection_filtering.htm
Enable/Disable Archive logging mode
By default, archive logs are written to the flash recovery area. If you do not want archive logs in the flash recovery area, set the parameter LOG_ARCHIVE_DEST_n to the location where you want them written.
sqlplus / as sysdba
shutdown immediate
startup mount
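With the database mounted, archive logging is then switched before reopening; a minimal sketch:

```sql
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- to disable instead, run this from the MOUNT state:
-- ALTER DATABASE NOARCHIVELOG;
```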
alter database backup controlfile to trace;
If you tail the alert log, it shows the name of the trace file that was written. Open the trace file and make sure all datafiles are present.
ALTER TABLESPACE "XXDBD" ADD DATAFILE '/mnt/oragbi3/gbi3data/fs01/GBI3/xxdbd_22.dbf' SIZE 30G;
Drop Temp Datafile
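A minimal sketch (the tempfile path is a placeholder):

```sql
ALTER DATABASE TEMPFILE '/u01/oradata/GBI3/temp01.dbf' DROP INCLUDING DATAFILES;
```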
One client can prevent other clients from accessing the same object
The client can maintain a dependency for a long time, which prevents another client from changing the object.
Ref: http://docs.oracle.com/cd/B19306_01/server.102/b14211/instance_tune.htm#g58652
dba_tab_pending_stats
Ref: http://gavinsoorma.com/2009/09/11g-pending-and-published-statistics/
BEGIN
DBMS_STATS.DELETE_PENDING_STATS('sh','sales');
END;
/
Note: OPTIMIZER_USE_PENDING_STATISTICS
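To validate pending statistics before publishing them, the parameter can be flipped at session level (standard usage, sketched):

```sql
-- make the optimizer use pending (unpublished) statistics in this session
ALTER SESSION SET optimizer_use_pending_statistics = TRUE;
-- ... run the queries being validated, then revert:
ALTER SESSION SET optimizer_use_pending_statistics = FALSE;
```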
GET_PREFS
Syntax:
DBMS_STATS.GET_PREFS (
pname IN VARCHAR2,
ownname IN VARCHAR2 DEFAULT NULL,
tabname IN VARCHAR2 DEFAULT NULL)
RETURN VARCHAR2;
Parameters details:
AUTOSTATS_TARGET
CASCADE
CONCURRENT
DEGREE
ESTIMATE_PERCENT
METHOD_OPT
NO_INVALIDATE
GRANULARITY
PUBLISH
INCREMENTAL
INCREMENTAL_STALENESS
INCREMENTAL_LEVEL
STALE_PERCENT
GLOBAL_TEMP_TABLE_STATS
TABLE_CACHED_BLOCKS
OPTIONS
Ref: https://docs.oracle.com/database/121/ARPLS/d_stats.htm#ARPLS68594
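A usage sketch (owner and table names are placeholders):

```sql
-- preference for a specific table
SELECT dbms_stats.get_prefs('STALE_PERCENT', 'SH', 'SALES') FROM dual;
-- global default
SELECT dbms_stats.get_prefs('ESTIMATE_PERCENT') FROM dual;
```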
Export
Import
https://docs.oracle.com/database/121/TGSQL/tgsql_interp.htm#TGSQL94618
Discoverer Workbooks
Ref: https://docs.oracle.com/cd/B25016_08/doc/dl/bi/B13916_04/scheduled_workbooks.htm
Flush a single SQL ID from Shared Pool
cat multi_plans.sql
set lines 155
set pages 9999
col sql_text for a80
!echo
!echo "*** This script searches the shared pool for SQL statements with How_Many (or more) distinct plans."
-- !echo
select sql_id, count(distinct plan_hash_value) distinct_plans, sql_text
from v$sql
group by sql_id, sql_text
having count(distinct plan_hash_value) >= &how_many
/
cat find_sql.sql
set verify off
set pagesize 999
col username format a13
col prog format a22
col sql_text format a41
col sid format 999
col child_number format 99999 heading CHILD
col ocategory format a10
col avg_etime format 9,999,999.99
col etime format 9,999,999.99
cat flush_sql.sql
DECLARE
name varchar2(50);
version varchar2(3);
BEGIN
-- resolve the address,hash_value pair that dbms_shared_pool.purge expects
-- (the script prompts for the SQL_ID to flush)
select address||','||hash_value into name from v$sqlarea where sql_id = '&sql_id';
select regexp_replace(version,'\..*') into version from v$instance;
if version = '10' then
execute immediate
q'[alter session set events '5614566 trace name context forever']'; -- bug fix for 10.2.0.4 backport
end if;
sys.dbms_shared_pool.purge(name,'C',1);
END;
/
Ref: http://kerryosborne.oracle-guy.com/2008/09/flush-a-single-sql-statement/
Proactive performance tuning
Best Practices: Proactively Avoiding Database and Query Performance Issues (Doc ID 1482811.1)
Parameter Recommendations for 12c
Part1
https://blogs.oracle.com/UPGRADE/entry/parameter_recommendations_for_oracle_database
Part2
https://blogs.oracle.com/UPGRADE/entry/parameter_recommendations_for_oracle_database1
http://orabase.org/index.php/2016/02/20/some-parameter-recommendations-for-oracle-12c-upgrade-your-database-now/
Adaptive query optimization
https://support.oracle.com/epmos/faces/DocumentDisplay?_afrLoop=371286426765420&id=2031605.1&_afrWindowMode=0&_adf.ctrl-state=31byi69wh_102
EXPLAIN PLAN from sql_id
-- Shared Pool
select * from table(dbms_xplan.display_cursor('SQL_ID',null,'ALL'));
-- AWR
select * from table(dbms_xplan.display_awr('SQL_ID',null,null,'ALL'));
select * from table(dbms_xplan.display_awr('SQL_ID',null,DBID,'ALL'));
Gather Index statistics
Example:
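A typical call (owner and index name are placeholders):

```sql
EXEC dbms_stats.gather_index_stats(ownname => 'XXDBD_OBAW', indname => 'W_PURCH_ORDER_F_IDX1', estimate_percent => dbms_stats.auto_sample_size);
```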
Redo Log Switches
Standard
3 to 5 log switches per hour
select
trunc(first_time) day,
substr(to_char(first_time, 'hh24:mi'), 1, 4) || '0' as time,
count(*) switches
from
v$log_history
where
trunc(first_time) between
trunc(to_date('12.05.2016', 'dd.mm.yyyy'))
and
trunc(to_date('13.05.2016', 'dd.mm.yyyy'))
group by
trunc(first_time),
substr(to_char(first_time, 'hh24:mi'), 1, 4)
having
count(*) > 1
order by 1, 2;
What is inside SQL Profile
https://blogs.oracle.com/sysdba/entry/when_to_rebuild_index
Lock modes
https://jonathanlewis.wordpress.com/2010/06/21/locks/
Block changes of an Object
lsps -s
svmon -P -O summary=basic,unit=MB,sortentity=pgsp
nmon (m)
nmon (t)
pid to sid
https://community.oracle.com/thread/965141?start=0&tstart=0
truss output
https://docs.oracle.com/cd/E19120-01/open.solaris/819-1634/rfsrefer-44/index.html
http://houseofbrick.com/unix-operating-systems-utilities-for-the-dba-part-2/
LOBSEGMENT to TABLE mapping
Doc ID 829235.1
Purging programs
GL_JE_LINES
Doc ID 160481.1
OE_ORDER_LINES_HISTORY
Oe_order_lines_history Table (Doc ID 1383772.1)
CSI_PERIOD_CLOSE_SUMMARY:
Doc ID 1463673.1
Finding Trace Files
If you use parallel queries, some of your trace files will be located in background_dump_dest.
https://docs.oracle.com/cd/B28359_01/server.111/b28310/diag006.htm
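On 11g and later, V$DIAG_INFO reports trace locations directly:

```sql
SELECT name, value FROM v$diag_info
WHERE name IN ('Diag Trace', 'Default Trace File');
```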
[prod]-->sqlplus apps/password
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
SID
----------
5695
SQL> @sid2pid
Enter value for sid: 5695
old 4: and a.sid='&sid'
new 4: and a.sid='5695'
Session altered.
select
'Segment Advice-------------------------------' || chr(10) ||
'TABLESPACE_NAME :' || tablespace_name || chr(10) ||
'SEGMENT_OWNER :' || segment_owner || chr(10) ||
'SEGMENT_NAME :' || segment_name || chr(10) ||
'ALLOCATED_SPACE :' || allocated_space || chr(10) ||
'RECLAIMABLE_SPACE :' || reclaimable_space || chr(10) ||
'RECOMMENDATIONS :' || recommendations || chr(10) ||
'SOLUTION1 :' || c1 || chr(10) ||
'SOLUTION2 :' || c2 || chr(10) ||
'SOLUTION3 :' || c3 Advice
FROM
TABLE(dbms_space.asa_recommendations('FALSE', 'FALSE', 'FALSE'))
WHERE segment_name = 'JTF_TASKS_B'
;
Recommendations (Sam Alapati)
begin
dbms_workload_repository.create_baseline(
start_snap_id => 31261,
end_snap_id => 31264,
baseline_name => 'TUE_12PM_TO_3PMEST',
expiration => null );
end;
/
begin
dbms_workload_repository.drop_baseline('MONDAY_8AM_TO_9AMEST');
end;
/
begin
dbsnmp.bsln_internal.compute_statistics(1,3);
end;
/
ORA-4020 Deadlock while compiling PL/SQL package
SELECT s.sid,
l.lock_type,
l.mode_held,
l.mode_requested,
l.lock_id1,
'alter system kill session '''|| s.sid|| ','|| s.serial#|| ''' immediate;'
kill_sid
FROM dba_lock_internal l,
v$session s
WHERE l.session_id = s.sid
/
Number of migration/chained rows
SQL> select name, value from v$sysstat where name = 'table fetch continued row';
NAME VALUE
---------------------------------------------------------------- ----------
table fetch continued row 2479004704
with a as (select sum(value) total_rows_read from v$sysstat
           where name like '%table%' and name != 'table fetch continued row'),
     b as (select value total_mig_chain_rows from v$sysstat
           where name = 'table fetch continued row')
select a.total_rows_read, b.total_mig_chain_rows,
       b.total_mig_chain_rows/a.total_rows_read pct_rows_mig_or_chained
from a, b;
APPEND (vs) APPEND_VALUES (vs) SYS_DL_CURSOR
Ref: https://docs.oracle.com/cd/B19306_01/server.102/b14200/statements_5010.htm#BGEHDECJ
Exadata Compression
Ref: http://learndiscoverer.blogspot.com/2008/06/monitoring-and-removing-old-statistics.html
DBA_JOBS_RUNNING:
Doc ID 375777.1
Doc ID 390304.1
NOTE:370855.1 - How To Store Scheduled Workbooks Results In A Different Schema Other than
APPS?
NOTE:459747.1 - How To Find A Particular Scheduled Workbook's Result Set Table In Discoverer
NOTE:555612.1 - How and When Can You Delete Discoverer Scheduled Workbooks
Action Item: Change priority of Disco jobs?
Information about when the table was created, the creator, last updated date and who updated it
can be obtained from EUL5_BQ_TABLES;
select BQT_TABLE_NAME, BQT_CREATED_BY, BQT_CREATED_DATE from
DISCEUL_US.EUL5_BQ_TABLES order by 3;
Can EUL$ Tables Owned by Apps Schema Be Dropped from the Database? (Doc ID 377012.1)
SELECT
QS.QS_DOC_OWNER USERNAME,
QS.QS_DOC_NAME WORKBOOK,
QS.QS_DOC_DETAILS WORKSHEET,
MAX(QS.QS_CREATED_DATE) LAST_USED
FROM
EUL5_QPP_STATS QS
GROUP BY
QS.QS_DOC_OWNER,
QS.QS_DOC_NAME,
QS.QS_DOC_DETAILS;
Ref: http://learndiscoverer.blogspot.com/2006/04/last-accessed-date-for-workbook.html
SELECT
QS.QS_DOC_OWNER USERNAME,
QS.QS_DOC_NAME WORKBOOK,
QS.QS_DOC_DETAILS WORKSHEET,
MAX(QS.QS_CREATED_DATE) LAST_USED
FROM
DISCEUL_US.EUL5_QPP_STATS QS
WHERE QS_DOC_NAME = 'CB SVC400 Service Contract Lines Billing Streams1286'
GROUP BY QS_DOC_OWNER, QS_DOC_NAME, QS_DOC_DETAILS ;
Query:
--select count(*) from (
-- SELECT t9.*, T10.blocks*8/1000 SIZE_IN_MB
SELECT SUM(T10.blocks)*8/1024/1024 SIZE_IN_GB
FROM dba_tables T10,
(
SELECT /*+ PARALLEL(8) */
T8.QS_DOC_OWNER USERNAME, T8.QS_DOC_NAME WORKBOOK,T8.QS_DOC_DETAILS,
T7.BQT_TABLE_NAME, MAX(T8.QS_CREATED_DATE)
FROM
(SELECT T6.BR_ID, T6.BR_WORKBOOK_NAME, T5.*
FROM DISCEUL_US.EUL5_BATCH_REPORTS T6,
( SELECT T3.BS_BR_ID, T3.BS_SHEET_NAME, T4.*
FROM DISCEUL_US.EUL5_BATCH_SHEETS T3,
(
SELECT T1.BQ_BS_ID, T2.*
FROM
DISCEUL_US.EUL5_BATCH_QUERIES T1,
(select distinct BQT_BQ_ID, BQT_CREATED_BY, BQT_TABLE_NAME from DISCEUL_US.EUL5_BQ_TABLES)
T2
WHERE T2.BQT_BQ_ID = T1.BQ_ID
) T4
WHERE
T4.BQ_BS_ID = T3.BS_ID
) T5
WHERE T6.BR_ID = T5.BS_BR_ID
) T7,
DISCEUL_US.EUL5_QPP_STATS T8 WHERE
T7.BR_WORKBOOK_NAME = T8.QS_DOC_NAME
-- AND T7.BS_SHEET_NAME = T8.QS_DOC_DETAILS
GROUP BY T8.QS_DOC_OWNER, T8.QS_DOC_NAME,T8.QS_DOC_DETAILS, T7.BQT_TABLE_NAME
) T9
WHERE T9.BQT_TABLE_NAME = T10.TABLE_NAME
-- ORDER BY 6 DESC NULLS LAST
--)
Workbook (vs) Sheet (vs) Result table name
SELECT
T6.BR_WORKBOOK_NAME, T5.BS_SHEET_NAME, T5.BQT_TABLE_NAME
FROM DISCEUL_US.EUL5_BATCH_REPORTS T6,
( SELECT T3.BS_BR_ID, T3.BS_SHEET_NAME, T4.*
FROM DISCEUL_US.EUL5_BATCH_SHEETS T3,
(
SELECT T1.BQ_BS_ID, T2.*
FROM
DISCEUL_US.EUL5_BATCH_QUERIES T1,
(select distinct BQT_BQ_ID, BQT_CREATED_BY, BQT_TABLE_NAME from DISCEUL_US.EUL5_BQ_TABLES)
T2
WHERE T2.BQT_BQ_ID = T1.BQ_ID
) T4
WHERE
T4.BQ_BS_ID = T3.BS_ID
) T5
WHERE T6.BR_ID = T5.BS_BR_ID
;
EUL5_BATCH_REPORTS
Query2:
WITH APPS_SEG_INFO AS
(
SELECT /*+ PARALLEL(8) */ SEGMENT_NAME, SUM(BLOCKS)*8/1024 BQT_TABLE_SIZE_MB FROM
dba_segments
where OWNER = 'APPS' AND SEGMENT_NAME like '%EUL5_B%R%' GROUP BY SEGMENT_NAME
)
select
BQT.BQT_TABLE_NAME "TABLE_NAME(BQT)", ASI.BQT_TABLE_SIZE_MB, BQT.BQT_CREATED_DATE
"TABLE_CREATED_DATE(BQT)",
BRR.BRR_CREATED_DATE "TABLE_CREATED_DATE(BRR)", BRR.BRR_RUN_DATE
"SCHEDULE_RUN_DATE(BRR)",
BR.BR_EXPIRY, BR.BR_NAME, BR.BR_WORKBOOK_NAME, FU.USER_NAME
--BQT.BQT_ID, BRR.BRR_ID, BR.BR_ID, BRR.BRR_CREATED_DATE, BRR.BRR_RUN_DATE, BRR.BRR_STATE,
--BQT.BQT_TABLE_NAME, BQT.BQT_CREATED_DATE, BR.BR_NAME, BR.BR_WORKBOOK_NAME,
--BR.BR_EXPIRY, BR.BR_CREATED_BY, BR.BR_EU_ID,
--EU.EU_USERNAME, FU.USER_NAME, ASI.BQT_TABLE_SIZE_MB
FROM
disceul_us.eul5_bq_tables BQT
JOIN DISCEUL_US.EUL5_BR_RUNS BRR ON BQT.BQT_BRR_ID = BRR.BRR_ID
JOIN DISCEUL_US.EUL5_BATCH_REPORTS BR ON BRR.BRR_BR_ID = BR.BR_ID
JOIN DISCEUL_US.EUL5_EUL_USERS EU ON BR.BR_EU_ID = EU.EU_ID
LEFT JOIN APPS.FND_USER FU ON REPLACE(EU.EU_USERNAME,'#','') = FU.USER_ID
JOIN APPS_SEG_INFO ASI ON BQT.BQT_TABLE_NAME = ASI.SEGMENT_NAME
ORDER BY ASI.BQT_TABLE_SIZE_MB DESC
;
Query3
WITH APPS_SEG_INFO AS
(
SELECT /*+ PARALLEL(8) */ SEGMENT_NAME, SUM(BLOCKS)*8/1024 BQT_TABLE_SIZE_MB FROM
dba_segments
where OWNER = 'APPS' AND SEGMENT_NAME like '%EUL5_B%R%' GROUP BY SEGMENT_NAME
)
select
BQT.BQT_TABLE_NAME "TABLE_NAME(BQT)", ASI.BQT_TABLE_SIZE_MB, BQT.BQT_CREATED_DATE
"TABLE_CREATED_DATE(BQT)",
BRR.BRR_CREATED_DATE "TABLE_CREATED_DATE(BRR)", BRR.BRR_RUN_DATE
"SCHEDULE_RUN_DATE(BRR)",
BR.BR_NEXT_RUN_DATE, BR.BR_AUTO_REFRESH, BR.BR_EXPIRY, BR.BR_NAME,
BR.BR_WORKBOOK_NAME, FU.USER_NAME
--BQT.BQT_ID, BRR.BRR_ID, BR.BR_ID, BRR.BRR_CREATED_DATE, BRR.BRR_RUN_DATE, BRR.BRR_STATE,
--BQT.BQT_TABLE_NAME, BQT.BQT_CREATED_DATE, BR.BR_NAME, BR.BR_WORKBOOK_NAME,
--BR.BR_EXPIRY, BR.BR_CREATED_BY, BR.BR_EU_ID,
--EU.EU_USERNAME, FU.USER_NAME, ASI.BQT_TABLE_SIZE_MB
FROM
disceul_us.eul5_bq_tables BQT
JOIN DISCEUL_US.EUL5_BR_RUNS BRR ON BQT.BQT_BRR_ID = BRR.BRR_ID
JOIN DISCEUL_US.EUL5_BATCH_REPORTS BR ON BRR.BRR_BR_ID = BR.BR_ID
JOIN DISCEUL_US.EUL5_EUL_USERS EU ON BR.BR_EU_ID = EU.EU_ID
LEFT JOIN APPS.FND_USER FU ON REPLACE(EU.EU_USERNAME,'#','') = FU.USER_ID
JOIN APPS_SEG_INFO ASI ON BQT.BQT_TABLE_NAME = ASI.SEGMENT_NAME
ORDER BY ASI.BQT_TABLE_SIZE_MB DESC
;
select MIN(CREATED) from dba_objects where owner = 'APPS' and object_name like '%EUL5_B16%R%'
Display date with hours/minutes information
Ref: https://asktom.oracle.com/pls/asktom/f?p=100:11:::::P11_QUESTION_ID:1660875645686
Find out expired reports
How to Cancel or Delete a Scheduled Discoverer Report When Running (Doc ID 429415.1)
BRR_STATE has nine allowable values, from 1 to 9; the description of each value is as follows:
9. Run all
@1.drop_eul5.sql
@2.eul5_bq_delete.sql
@3.eul5_bq_deps_delete.sql
@4.eul5_seg_delete.sql
@5.eul5_bq_delete.sql
@6.eul5_bp_delete.sql
@7.eul5_bs_delete.sql
@8.eul5_br_delete.sql
commit;
Find out DISCO user sessions
PROGRAM is an important column. It shows dis51ws@machinename for both interactive users and scheduled workbooks, with a suffix appended: an interactive user workbook shows (TNS V1-V3), while a scheduled workbook shows (J000) appended to the program name.
Job ID to DISCO user mapping
select a.sid, a.job, b.br_name, a.THIS_DATE||' ' || a.THIS_SEC "Start time", FU.user_name from
dba_jobs_running a, disceul_us.eul5_batch_reports b, applsys.fnd_user fu
where a.job = b.br_job_id
AND replace(b.BR_CREATED_BY, '#','') = FU.USER_ID;
DISCO sessions running over 3 hours
SELECT * FROM
(
select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE)
BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_2DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE)
BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_7DAY
, LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE)
BLOCKS_PREV_14DAY
, blocks - LEAD(blocks,14,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_14DAY
, LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE)
BLOCKS_PREV_21DAY
, blocks - LEAD(blocks,21,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_21DAY
, LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE)
BLOCKS_PREV_30DAY
, blocks - LEAD(blocks,30,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_30DAY
from monuser.v_gsi_dba_segments
-- WHERE SEGMENT_NAME = 'ABM_ACC_MAP_SUM_REP'
) T1
WHERE T1.rno = 1
AND blocks_diff_1day > 0 AND OWNER != 'SYS' ORDER BY blocks_diff_1day desc
;
SELECT * FROM
(
select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_1DAY
, INSERT_DATE - 1 DATE_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_2DAY
, INSERT_DATE - 2 DATE_2DAY
, LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_3DAY
, blocks - LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_3DAY
, INSERT_DATE - 3 DATE_3DAY
, LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_4DAY
, blocks - LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_4DAY
, INSERT_DATE - 4 DATE_4DAY
, LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_5DAY
, blocks - LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_5DAY
, INSERT_DATE - 5 DATE_5DAY
, LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_6DAY
, blocks - LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_6DAY
, INSERT_DATE - 6 DATE_6DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_7DAY
, INSERT_DATE - 7 DATE_7DAY
, LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_8DAY
, blocks - LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_8DAY
, INSERT_DATE - 8 DATE_8DAY
, LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC )
BLOCKS_PREV_9DAY
, blocks - LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_9DAY
, INSERT_DATE - 9 DATE_9DAY
, LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_10DAY
, blocks - LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_10DAY
, INSERT_DATE - 10 DATE_10DAY
, LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_11DAY
, blocks - LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_11DAY
, INSERT_DATE - 11 DATE_11DAY
,DATE_1DAY, BLOCKS_PREV_1DAY
, CASE WHEN BLOCKS_PREV_1DAY = 0 THEN -1 ELSE BLOCKS_DIFF_1DAY END BLOCKS_DIFF_1DAY
, CASE WHEN BLOCKS_PREV_1DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_1DAY)*8/1024 END
growth_MB_1day
,DATE_2DAY, BLOCKS_PREV_2DAY
, CASE WHEN BLOCKS_PREV_2DAY = 0 THEN -1 ELSE BLOCKS_DIFF_2DAY END BLOCKS_DIFF_2DAY
, CASE WHEN BLOCKS_PREV_2DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_2DAY)*8/1024 END
growth_MB_2day
,DATE_3DAY, BLOCKS_PREV_3DAY
, CASE WHEN BLOCKS_PREV_3DAY = 0 THEN -1 ELSE BLOCKS_DIFF_3DAY END BLOCKS_DIFF_3DAY
, CASE WHEN BLOCKS_PREV_3DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_3DAY)*8/1024 END
growth_MB_3day
,DATE_4DAY, BLOCKS_PREV_4DAY
, CASE WHEN BLOCKS_PREV_4DAY = 0 THEN -1 ELSE BLOCKS_DIFF_4DAY END BLOCKS_DIFF_4DAY
, CASE WHEN BLOCKS_PREV_4DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_4DAY)*8/1024 END
growth_MB_4day
,DATE_5DAY, BLOCKS_PREV_5DAY
, CASE WHEN BLOCKS_PREV_5DAY = 0 THEN -1 ELSE BLOCKS_DIFF_5DAY END BLOCKS_DIFF_5DAY
, CASE WHEN BLOCKS_PREV_5DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_5DAY)*8/1024 END
growth_MB_5day
,DATE_6DAY, BLOCKS_PREV_6DAY
, CASE WHEN BLOCKS_PREV_6DAY = 0 THEN -1 ELSE BLOCKS_DIFF_6DAY END BLOCKS_DIFF_6DAY
, CASE WHEN BLOCKS_PREV_6DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_6DAY)*8/1024 END
growth_MB_6day
,DATE_7DAY, BLOCKS_PREV_7DAY
, CASE WHEN BLOCKS_PREV_7DAY = 0 THEN -1 ELSE BLOCKS_DIFF_7DAY END BLOCKS_DIFF_7DAY
, CASE WHEN BLOCKS_PREV_7DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_7DAY)*8/1024 END
growth_MB_7day
,DATE_14DAY, BLOCKS_PREV_14DAY
, CASE WHEN BLOCKS_PREV_14DAY = 0 THEN -1 ELSE BLOCKS_DIFF_14DAY END BLOCKS_DIFF_14DAY
, CASE WHEN BLOCKS_PREV_14DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_14DAY)*8/1024 END
growth_MB_14day
,DATE_21DAY, BLOCKS_PREV_21DAY
, CASE WHEN BLOCKS_PREV_21DAY = 0 THEN -1 ELSE BLOCKS_DIFF_21DAY END BLOCKS_DIFF_21DAY
, CASE WHEN BLOCKS_PREV_21DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_21DAY)*8/1024 END
growth_MB_21day
,DATE_30DAY, BLOCKS_PREV_30DAY
, CASE WHEN BLOCKS_PREV_30DAY = 0 THEN -1 ELSE BLOCKS_DIFF_30DAY END BLOCKS_DIFF_30DAY
, CASE WHEN BLOCKS_PREV_30DAY = 0 THEN -1 ELSE (BLOCKS - BLOCKS_PREV_30DAY)*8/1024 END
growth_MB_30day
FROM
(
select owner, segment_name, BLOCKS, INSERT_DATE,
row_number() over (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) rno
, LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_1DAY
, blocks - LEAD(blocks,1,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_1DAY
, INSERT_DATE - 1 DATE_1DAY
, LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_2DAY
, blocks - LEAD(blocks,2,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_2DAY
, INSERT_DATE - 2 DATE_2DAY
, LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_3DAY
, blocks - LEAD(blocks,3,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_3DAY
, INSERT_DATE - 3 DATE_3DAY
, LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_4DAY
, blocks - LEAD(blocks,4,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_4DAY
, INSERT_DATE - 4 DATE_4DAY
, LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_5DAY
, blocks - LEAD(blocks,5,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_5DAY
, INSERT_DATE - 5 DATE_5DAY
, LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_6DAY
, blocks - LEAD(blocks,6,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_6DAY
, INSERT_DATE - 6 DATE_6DAY
, LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_7DAY
, blocks - LEAD(blocks,7,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_7DAY
, INSERT_DATE - 7 DATE_7DAY
, LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_8DAY
, blocks - LEAD(blocks,8,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_8DAY
, INSERT_DATE - 8 DATE_8DAY
, LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC )
BLOCKS_PREV_9DAY
, blocks - LEAD(blocks,9,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC) AS
BLOCKS_DIFF_9DAY
, INSERT_DATE - 9 DATE_9DAY
, LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_10DAY
, blocks - LEAD(blocks,10,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_10DAY
, INSERT_DATE - 10 DATE_10DAY
, LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
BLOCKS_PREV_11DAY
, blocks - LEAD(blocks,11,0) OVER (partition by OWNER, SEGMENT_NAME order by INSERT_DATE DESC)
AS BLOCKS_DIFF_11DAY
, INSERT_DATE - 11 DATE_11DAY
SELECT
NVL(GB_TODAY,-1) GB_TODAY, NVL(GB_1day,-1) GB_1DAY, NVL(gb_2day,-1) GB_2DAY, NVL(gb_3day,-1)
GB_3DAY, NVL(gb_4day,-1) GB_4DAY, NVL(gb_5day,-1) GB_5DAY
, NVL(GB_6day,-1) GB_6day, NVL(GB_7day,-1) GB_7day, NVL(gb_8day,-1) GB_8day, NVL(gb_9day,-1)
GB_9day, NVL(gb_10day,-1) GB_10day
, NVL(gb_11day,-1) GB_11day, NVL(GB_12day,-1) GB_12day, NVL(GB_13day,-1) gb_13day,
NVL(gb_14day,-1) gb_14day, NVL(gb_15day,-1) gb_15day
, NVL(gb_16day,-1) gb_16day, NVL(GB_17day,-1) gb_17day, NVL(GB_18day,-1) gb_18day,
NVL(gb_19day,-1) gb_19day, NVL(gb_20day,-1) gb_20day
, NVL(gb_21day,-1) gb_21day, NVL(GB_22day,-1) gb_22day, NVL(GB_23day,-1) gb_23day,
NVL(gb_24day,-1) gb_24day, NVL(gb_25day,-1) gb_25day
, NVL(gb_26day,-1) gb_26day , NVL(GB_27day,-1) gb_27day, NVL(GB_28day,-1) gb_28day ,
NVL(gb_29day,-1) gb_29day, NVL(gb_30day,-1) gb_30day
, NVL(gb_60day,-1) gb_60day, NVL(gb_90day,-1) gb_90day
FROM
(
SELECT
(select USED_SPACE_GB from v_db_size_info where dname = 'GSIPROD' and to_char(INSERT_DATE,
'YYYYMMDD') = to_char(SYSDATE, 'YYYYMMDD')) GB_TODAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-1, 'YYYYMMDD')) GB_1DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-2, 'YYYYMMDD')) GB_2DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-3, 'YYYYMMDD')) GB_3DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-4, 'YYYYMMDD')) GB_4DAY
, (select NVL(USED_SPACE_GB,0) from v_db_size_info where dname = 'GSIPROD' and
to_char(INSERT_DATE, 'YYYYMMDD') = to_char(SYSDATE-5, 'YYYYMMDD')) GB_5DAY
The query below lists reports run between two dates, ordered by elapsed time:
SELECT fu.user_name
, fu.description
, q.qs_created_date run_date
, q.qs_doc_name report
, q.qs_doc_details sheet
, q.qs_act_elap_time elapsed_time_seconds
FROM disceul_us.eul5_qpp_stats q
JOIN applsys.fnd_user fu ON q.qs_created_by = '#' || fu.user_id
WHERE 1 = 1
AND q.qs_created_date > '16-AUG-2016' AND q.qs_created_date < '18-AUG-2016'
-- AND fu.user_name = 'SYSADMIN'
-- AND q.qs_doc_name IN ('My Test Report')
-- ORDER BY q.qs_created_date DESC;
ORDER BY q.qs_act_elap_time DESC;
Ref: https://oracle101.co.uk/discoverer/disco-reports-usage-submitted/
https://drdavetaylor.com/2015/06/16/discoverer-reporting-using-sql-to-generate-report-usage-statistics/
Read by other session
http://logicalread.solarwinds.com/oracle-read-by-other-session-wait-event/#.V7v9-Pl96M8
Column usage
begin
dbms_stats.seed_col_usage (null, null, 600);
end;
/
600: the number of seconds to monitor column usage
select dbms_stats.create_extended_stats(null, 'table_name', '(c1, c2, c3)') from dual;
Ref: https://docs.oracle.com/cd/B28359_01/server.111/b28310/diag006.htm
The path to the ADR trace directory for the current instance is returned.
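For example, the 'Diag Trace' row of V$DIAG_INFO holds that directory:

```sql
SELECT value FROM v$diag_info WHERE name = 'Diag Trace';
```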
on usnca009:
cd /tmp
The objective is to run a VNC clone. The steps below were done on the CONV13 application node:
[]-->vncserver :5
ssh usnca021
sudo su - conv13ora
RAC Technologies support matrix
http://www.oracle.com/technetwork/database/clustering/tech-generic-linux-new-086754.html
http://www.oracle.com/technology/products/database/clustering/index.html
Public (vs) private (vs) virtual
Public IP: The public IP address is for the server. This is the same as any server IP address: a unique address that exists in /etc/hosts.
Private IP: Oracle RAC requires "private IP" addresses to manage the CRS, the
clusterware heartbeat process and the cache fusion layer.
Virtual IP: Oracle uses a Virtual IP (VIP) for database access. The VIP must
be on the same subnet as the public IP address. The VIP is used for RAC
failover (TAF).
Using coe_xfr_sql_profile.sql to force a good plan
Ref1: http://www.bobbydurrettdba.com/2013/07/10/using-coe_xfr_sql_profile-sql-to-force-a-good-plan/
Ref2: [prodora@usnca003a:/home/prodora/users/ravi/EDC]
sql_id hash_value of a query
SQL> select sql_id, hash_value from v$sql where sql_text = 'select * from dual';
SQL_ID HASH_VALUE
------------- ----------
a5ks9fhw2v9s1 942515969