This script tells you which schemas need statistics gathered before an upgrade, and more generally which schemas need it for better performance.
-- - - - - - - - - - - - - - - - Script begins here - - - - - - - - - - - - - - - -
-- NAME: CHECK_STALE_STATS.SQL
-- Execute as SYS, connected as SYSDBA
-- PURPOSE:
-- This script is an automated way to deal with stale statistics operations that are
-- required to be done as part of manual upgrade OR when reported by DBUA.
--
-- This script will work in both Windows and Unix platforms from database
-- version 9.2 or higher.
--
SET FEEDBACK OFF
SET LINESIZE 250
SET SERVEROUTPUT ON
DECLARE
-- Variables declared
P_OTAB DBMS_STATS.OBJECTTAB;
MCOUNT NUMBER := 0;
P_VERSION VARCHAR2(10);
-- Cursor defined
CURSOR c1
IS
SELECT distinct schema
FROM dba_registry
ORDER by 1;
-- Beginning of the anonymous block
BEGIN
-- Verifying version from v$instance
SELECT version INTO p_version FROM v$instance;
DBMS_OUTPUT.PUT_LINE(chr(13));
-- Defining Loop 1 for listing schema which have stale stats
FOR x in c1
LOOP
DBMS_STATS.GATHER_SCHEMA_STATS(OWNNAME=>x.schema,OPTIONS=>'LIST AUTO',OBJLIST=>p_otab);
-- Defining Loop 2 to find number of objects containing stale stats
FOR i in 1 .. p_otab.count
LOOP
IF p_otab(i).objname NOT LIKE 'SYS_%'
AND p_otab(i).objname NOT IN ('CLU$','COL_USAGE$','FET$','INDPART$',
'MON_MODS$','TABPART$','HISTGRM$',
'MON_MODS_ALL$',
'HIST_HEAD$','IND$','TAB$',
'WRI$_OPTSTAT_OPR','PUIU$DATA',
'XDB$NLOCKS_CHILD_NAME_IDX',
'XDB$NLOCKS_PARENT_OID_IDX',
'XDB$NLOCKS_RAWTOKEN_IDX', 'XDB$SCHEMA_URL',
'XDBHI_IDX', 'XDB_PK_H_LINK')
THEN
-- Incrementing count for each object found with stale stats
mcount := mcount + 1;
END IF;
-- End of Loop 2
END LOOP;
-- Displays no stale statistics if count is 0
IF mcount!=0
THEN
-- Displays Schema with stale stats if count is greater than 0
DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------------------------------');
DBMS_OUTPUT.PUT_LINE('-- ' || x.schema || ' schema contains stale statistics; use the following to gather the statistics --');
DBMS_OUTPUT.PUT_LINE('-------------------------------------------------------------------------------------------------------');
--DBMS_OUTPUT.PUT_LINE('');
-- Displays Command to be executed if schema with stale statistics is found depending on the version.
IF SUBSTR(p_version,1,5) in ('8.1.7','9.0.1','9.2.0')
THEN
DBMS_OUTPUT.PUT_LINE(chr(13));
DBMS_OUTPUT.PUT_LINE('EXEC DBMS_STATS.GATHER_SCHEMA_STATS(''' || x.schema || ''',OPTIONS=>''GATHER'', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, METHOD_OPT => ''FOR ALL COLUMNS SIZE AUTO'', CASCADE => TRUE);');
ELSIF SUBSTR(p_version,1,6) in ('10.1.0','10.2.0','11.1.0')
THEN
DBMS_OUTPUT.PUT_LINE(chr(13));
DBMS_OUTPUT.PUT_LINE('EXEC DBMS_STATS.GATHER_DICTIONARY_STATS(''' || x.schema || ''',OPTIONS=>''GATHER'', ESTIMATE_PERCENT => DBMS_STATS.AUTO_SAMPLE_SIZE, METHOD_OPT => ''FOR ALL COLUMNS SIZE AUTO'', CASCADE => TRUE);');
ELSE
DBMS_OUTPUT.PUT_LINE(chr(13));
DBMS_OUTPUT.PUT_LINE('Version is ' || p_version);
END IF;
ELSE
DBMS_OUTPUT.PUT_LINE('-- There are no stale statistics in ' || x.schema || ' schema.');
DBMS_OUTPUT.PUT_LINE(chr(13));
END IF;
-- Reset count to 0.
mcount := 0;
-- End of Loop 1
END LOOP;
END;
/
SET FEEDBACK ON
-- - - - - - - - - - - - - - - - Script ends here - - - - - - - - - - - - - - - -
Note:
If the database is 9i or earlier, the script tells you to run gather schema stats; if it is 10g or above, it tells you to run gather dictionary stats. It will usually flag the SYS schema, and running the gather once for SYS is fine. If any schema other than SYS is reported, keep gathering (and re-running this check) until its statistics are no longer stale.
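On 10g and above you can also cross-check from the dictionary itself which objects still have stale statistics after the gather. A minimal sketch (the SYS owner filter is only an example; use whichever schema the script reported):
-- 10g and later: list objects the database itself flags as having stale statistics
-- (the owner filter is only an example; change it to the schema reported above)
SELECT owner, table_name, last_analyzed, stale_stats
FROM dba_tab_statistics
WHERE owner = 'SYS'
AND stale_stats = 'YES'
ORDER BY table_name;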
Wednesday, December 10, 2008
Document for exporting existing STATISTICS before Gathering Statistics:
Metalink Reference:
Complete Checklist for Manual Upgrades to 10gR2
Note:316889.1
This document describes how to export the existing statistics before gathering new ones, so that if the new statistics turn out not to be needed we can delete them and import the old ones back.
When upgrading from 10g R1 to R2, it is recommended to gather (and first back up) statistics for the following four schemas:
======================
1. SYS
2. ODM
3. OLAPSYS
4. MDSYS
Step 1: Take a backup of the existing stats for the following four users:
====================================================
1. sqlplus / as sysdba
2. spool sdict
3. grant analyze any to sys;
4. exec dbms_stats.create_stat_table('SYS','dictstattab')
Step 2: Now that the stat table is created, export the existing stats for the four users:
==================================================================================================
exec dbms_stats.export_schema_stats('SYS','dictstattab',statown => 'SYS')
exec dbms_stats.export_schema_stats('ODM','dictstattab',statown => 'SYS')
exec dbms_stats.export_schema_stats('OLAPSYS','dictstattab',statown => 'SYS')
exec dbms_stats.export_schema_stats('MDSYS','dictstattab',statown => 'SYS')
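To confirm the exports actually populated the backup table before moving on, a quick sanity check (sys.dictstattab is the stat table created in Step 1):
-- Sanity check: the backup stat table created in Step 1 should now contain rows
select count(*) from sys.dictstattab;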
Step 3: Run the gather statistics:
============================
1. sqlplus '/as sysdba'
2. spool gdict
3. grant analyze any to sys;
exec dbms_stats.gather_schema_stats('ODM',options=>'GATHER',estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
exec dbms_stats.gather_schema_stats('OLAPSYS',options=>'GATHER',estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
exec dbms_stats.gather_schema_stats('MDSYS',options=>'GATHER',estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
exec dbms_stats.gather_schema_stats('SYS',options=>'GATHER',estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE, method_opt => 'FOR ALL COLUMNS SIZE AUTO', cascade => TRUE);
spool off;
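After the gather completes you can verify that the objects were really analyzed. A minimal check against DBA_TABLES:
-- Verify the gather: LAST_ANALYZED should now be recent for all four schemas
select owner, min(last_analyzed) oldest, max(last_analyzed) newest
from dba_tables
where owner in ('SYS','ODM','OLAPSYS','MDSYS')
group by owner;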
Step 4: OPTIONAL (only when we need the old statistics back)
===================================================
If we want to go back to the old statistics that existed before running the gather above, do the following:
1. Delete the newly gathered stats for the schema, for example:
exec dbms_stats.delete_schema_stats('SYS')
2. Import the previously exported stats for that schema (repeat for all four users), for example:
exec dbms_stats.import_schema_stats('SYS','dictstattab');
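Putting Step 4 together, a sketch of the full restore for all four users (assuming the backup table SYS.DICTSTATTAB created in Step 1):
-- Restore the pre-gather statistics for all four schemas
-- from the backup stat table created in Step 1 (SYS.DICTSTATTAB)
exec dbms_stats.delete_schema_stats('SYS')
exec dbms_stats.import_schema_stats('SYS','dictstattab',statown => 'SYS')
exec dbms_stats.delete_schema_stats('ODM')
exec dbms_stats.import_schema_stats('ODM','dictstattab',statown => 'SYS')
exec dbms_stats.delete_schema_stats('OLAPSYS')
exec dbms_stats.import_schema_stats('OLAPSYS','dictstattab',statown => 'SYS')
exec dbms_stats.delete_schema_stats('MDSYS')
exec dbms_stats.import_schema_stats('MDSYS','dictstattab',statown => 'SYS')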
Saturday, October 18, 2008
User Validation in apps
This package call checks whether an Apps user login is valid:
select fnd_web_sec.validate_login('GUEST','ORACLE') from dual;
Note:
Run this query connected as the APPS user (from the applmgr account) and check whether the login is reported as valid. Also check the .dbc file under $FND_TOP/secure and look in $APPLCSF/log for errors.
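As an extra sanity check you can confirm the account exists in FND_USER and is not end-dated. A minimal sketch (GUEST is only the example user from the query above):
-- Confirm the application user exists and is not end-dated
-- (GUEST is only the example user from the validate_login call above)
select user_name, start_date, end_date
from apps.fnd_user
where user_name = 'GUEST';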
Purging Activities
Purging Activities for logs:
Total space to check:
/u01/orasoft/VISappl/ad/11.5.0>df -h
An overall check should be done for the following directories:
Directory     Before Purge    After Purge
Visdb         63 G            7.6 G
Visora        6 G             6 G
Visappl       20 G            20 G
Viscomn       83 G            40 G
Checks to be done at the database level:
1. Check the alert log and the bdump/udump trace files.
2. Navigate to $ORACLE_HOME/admin/$CONTEXT_NAME/
3. du -h *   -> to see the size of everything inside that directory.
4. To check a particular directory, go inside it and run du -h .
5. Now check the udump trace file count and how far back the files go, using:
6. ls -lrt | wc -l
7. ls -lrt | more
8. Output:
8. Output :
total 74927228
-rw-r--r-- 1 applmgr dba 73 Feb 1 2007 Errors0.log
-rw-r--r-- 1 applmgr dba 6489 Feb 1 2007 02010901.log
-rw-r--r-- 1 applmgr dba 1698 Feb 1 2007 l7825627.req
-rw-r--r-- 1 applmgr dba 2058 Feb 1 2007 l7816585.req
-rw-r--r-- 1 applmgr dba 1608 Feb 1 2007 l7830923.req
-rw-r--r-- 1 applmgr dba 1455 Feb 1 2007 l7830681.req
-rw-r--r-- 1 applmgr dba 1368 Feb 1 2007 l7830682.req
Note : Login as Oracle User.
Checks to be done at the Apps level:
1. Check $APPLCSF/log.
2. To purge $APPLCSF/log:
3. Log in as the applmgr user.
4. Manually run the following command to purge logs older than 30 days:
5. /u01/orasoft/Viscomn/admin/log>find . -mtime +30 -exec rm {} \;
Note: On the front end the Purge Concurrent Request program is scheduled, so it clears the FND tables, but the files remain physically on disk; use the steps above to remove them. The "+30" means logs are kept for 30 days and anything older is purged. (A query to check how far back the request history goes is shown after this checklist.)
2. Check $APPLCSF/out.
$APPLCSF/out starts from this date:
/u01/VIScomn/admin/out/VIS_VIS_sys38>ls -lrt | more
total 8025832
-rw-r--r-- 1 applmgr dba 852 Jan 23 2008 o8632542.out
-rw-r--r-- 1 applmgr dba 2052 Jan 23 2008 o8632020.out
-rw-r--r-- 1 applmgr dba 0 Jan 23 2008 o8631500.out
3. AD logs start from this date:
/u01/VISappl/admin/VIS_sys38/log>ls -lrt | more
total 736776
-rw-r--r-- 1 applmgr dba 5900192 Feb 15 2006 adaimgr_08102005_00.log
-rw-r--r-- 1 applmgr dba 21273464 Feb 15 2006 adwork002.log
-rw-r--r-- 1 applmgr dba 32952381 Feb 15 2006 adwork001.log
-rw-r--r-- 1 applmgr dba 19349015 Feb 15 2006 adwork003.log
Now Purge this as required.
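As mentioned in the note on purging $APPLCSF/log, the Purge Concurrent Request program only cleans the FND tables. A quick way to see how far back the request history goes (and whether that purge is keeping up) is a sketch like this:
-- How far back does the concurrent request history go?
-- If the oldest request_date is much older than your retention,
-- review the "Purge Concurrent Request and/or Manager Data" schedule.
select count(*) total_requests,
       min(request_date) oldest_request,
       max(request_date) newest_request
from apps.fnd_concurrent_requests;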
Thursday, June 19, 2008
Sql Tuning and Analysis
The Problem Scenario...
At a client site, a concurrent program, "Import Items", took four and a half hours on average to complete for loading 1,000 items. This was not acceptable: the client had to load 45,000 items for each of 11 organizations, so at that rate one organization would take about 202.5 hours and all 11 organizations about 2,227.5 hours, roughly 93 days, even running day and night.
Environment Details
Apps Version : 11.5.10.2
Database Version : 10.2.0.3
OS version : RHEL AS 4 update 6
RAM size : 8 GB
No.of Processors : 1 processor
SGA size : 2 GB
Preliminary investigation using trace in Oracle Applications 11i showed that the maximum time (about 3 hours) was taken up by the following SQL alone for the 1,000 items being imported.
MERGE INTO eni_oltp_item_star STAR
USING (SELECT item.inventory_item_id inventory_item_id,
item.organization_id organization_id,
item.CONCATENATED_SEGMENTS || ' (' || mtp.organization_code || ')' value,
decode(item.organization_id,mtp.master_organization_id,null,
item.inventory_item_id || '-' || mtp.master_organization_id)
master_id,
nvl(mic.category_id,-1) inv_category_id,
nvl(mic.category_set_id, :l_inv_category_set) inv_category_set_id,
nvl(kfv.concatenated_segments,'Unassigned') inv_concat_seg,
nvl(mic1.category_id,-1) vbh_category_id,
nvl(mic1.category_set_id, :l_vbh_category_set) vbh_category_set_id,
nvl(kfv1.concatenated_segments,'Unassigned') vbh_concat_seg,
nvl(item.item_catalog_group_id,-1) item_catalog_group_id,
item.primary_uom_code,
item.creation_date,
item.last_update_date
FROM mtl_system_items_interface interface,
mtl_system_items_b_kfv item,
mtl_parameters mtp,
mtl_item_categories mic ,
mtl_categories_b_kfv kfv ,
mtl_item_categories mic1,
mtl_categories_b_kfv kfv1
WHERE item.inventory_item_id = interface.inventory_item_id
-- AND interface.set_process_id = p_set_process_id
AND interface.set_process_id in
(:p_set_process_id, :p_set_process_id +1000000000000)
AND interface.process_flag = 7
AND item.organization_id = interface.organization_id
AND item.organization_id= mtp.organization_id
AND mic.organization_id(+) = item.organization_id
AND mic.inventory_item_id(+) = item.inventory_item_id
AND mic.category_id = kfv.category_id (+)
and mic.category_set_id(+) = :l_inv_category_set
AND mic1.organization_id(+) = item.organization_id
AND mic1.inventory_item_id(+) = item.inventory_item_id
AND mic1.category_id = kfv1.category_id (+)
and mic1.category_set_id(+) = :l_vbh_category_set) mti
ON (STAR.inventory_item_id = mti.inventory_item_id
AND STAR.organization_id = mti.organization_id)
WHEN MATCHED THEN
UPDATE SET STAR.value = mti.value,
STAR.inv_category_id = mti.inv_category_id,
STAR.inv_category_set_id = mti.inv_category_set_id,
STAR.inv_concat_seg = mti.inv_concat_seg,
STAR.vbh_category_id = mti.vbh_category_id,
STAR.vbh_category_set_id = mti.vbh_category_set_id,
STAR.vbh_concat_seg = mti.vbh_concat_seg,
STAR.master_id = mti.master_id,
STAR.item_catalog_group_id = mti.item_catalog_group_id,
STAR.primary_uom_code = mti.primary_uom_code,
STAR.last_update_date = mti.last_update_date
WHEN NOT MATCHED THEN
INSERT (
id,
value,
inventory_item_id,
organization_id,
inv_category_id,
inv_category_set_id,
inv_concat_seg,
vbh_category_id,
vbh_category_set_id,
vbh_concat_seg,
master_id,
item_catalog_group_id,
primary_uom_code,
creation_date,
last_update_date)
VALUES(
mti.inventory_item_id || '-' || mti.organization_id,
mti.value,
mti.inventory_item_id,
mti.organization_id,
mti.inv_category_id,
mti.inv_category_set_id,
mti.inv_concat_seg,
mti.vbh_category_id,
mti.vbh_category_set_id,
mti.vbh_concat_seg,
mti.master_id,
mti.item_catalog_group_id,
mti.primary_uom_code,
mti.creation_date,
mti.last_update_date)
call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.00 0.00 0 0 0 0
Execute 1 559.14 546.14 0 315621488 0 0
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 559.14 546.14 0 315621488 0 0
Evaluating options
The first thing we did was look at the optimizer: it was set to CHOOSE, the statement was using an index, and that looked fine. The second thing we checked was which tables are involved in this statement and when they were last analyzed.
For example
select last_analyzed from dba_tables where owner = 'ENI' and table_name = 'ENI_OLTP_ITEM_STAR';
The table had last been analyzed in 2005.
The table was then analyzed with:
ANALYZE TABLE eni.eni_oltp_item_star COMPUTE STATISTICS;
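On 10g, DBMS_STATS is generally the preferred way to gather optimizer statistics rather than ANALYZE; an equivalent call for the same table would be roughly the following sketch:
-- 10g alternative to the ANALYZE above: gather table, column and index stats
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'ENI',
    tabname          => 'ENI_OLTP_ITEM_STAR',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/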
The Improvement
===============
The same query now executed in under 1.4 hours to update 1,000 records. Although this is faster, it would still mean a lot of time to import 45,000 items for 11 organizations, so we looked further into the issue using the trace file / tkprof output.
Using the tkprof output file we found out which SQL statements were consuming the time. The quickest way to look through a large tkprof output file is to grep the lines that start with "total" and sort them on the third or fourth column, so the expensive SQL statements can be located easily instead of having to read every page of the tkprof output.
The command used to do the above activity is
$ grep "^total" tkprofoutputfile | sort -n +2
The above command gives output like the one shown below (on newer versions of sort, use "sort -n -k3" instead of "sort -n +2"):
total 2001 143.68 140.33 0 58128000 0 1000
total 2001 145.98 142.54 0 58128000 0 1000
total 2001 147.57 144.11 0 58128000 0 1000
total 2001 158.26 154.56 0 58128000 0 1000
total 2001 160.53 156.76 0 58128000 0 1000
total 2001 163.98 160.14 0 58128000 0 1000
total 2001 183.42 179.16 0 58128000 0 1000
total 2001 185.27 180.95 0 58128000 0 1000
total 2001 187.72 183.39 0 58128000 0 1000
total 2001 190.38 185.93 0 58128000 0 1000
total 2001 191.76 187.31 0 58128000 0 1000
total 2001 192.59 188.12 0 58128000 0 1000
total 2001 193.99 189.47 0 58128000 0 1000
total 2001 195.28 190.75 0 58128000 0 1000
total 2001 197.38 192.77 0 58128000 0 1000
total 2001 203.64 198.90 0 58128000 0 1000
total 2001 206.08 201.31 0 58128000 0 1000
total 2001 208.57 203.69 0 58128000 0 1000
total 2001 219.98 214.88 0 58128000 0 1000
total 2001 222.24 217.06 0 58128000 0 1000
total 2001 275.63 269.23 0 106943000 0 1000
total 271846 6074.20 5933.56 18800 2191690345 232725 186675
total 2 6088.02 5947.58 18800 2191686839 232550 1
total 198 6088.49 5948.03 18800 2191687323 232572 132
Note: there is no need to pay attention to the last rows here, which are the overall totals for recursive and non-recursive SQL; our intention is to find the individual SQL statements that take a long time to execute.
With the improved plan in place, we again started checking the tables involved in this scenario. While the concurrent request was running we took the request id, found the session using FND_CONCURRENT_REQUESTS, obtained the hash value from V$SESSION, and from V$SQLTEXT found the SQL involved.
The query is as follows:
select a.sql_text from v$sqltext a, v$session b, apps.fnd_concurrent_requests c
where c.request_id = &request_id
and c.oracle_session_id = b.audsid
and c.os_process_id = b.process
and b.sql_hash_value = a.hash_value
and b.sql_address = a.address
order by a.piece;
Supply the request id you are interested in, and you get the SQL that ran for that particular request, which in this case was the following:
SELECT INVENTORY_ITEM_ID FROM MTL_SYSTEM_ITEMS MSI WHERE MSI.INVENTORY_ITEM_ID = :B19 AND MSI.ORGANIZATION_ID = :B18 AND DECODE(:B17 ,:B2 ,NVL(:B16 ,-1),NVL(MSI.RELEASE_TIME_FENCE_CODE,-1))=NVL(MSI.RELEASE_TIME_FENCE_CODE,-1) AND DECODE(:B15 ,:B2 ,NVL(:B14 ,-1),NVL(MSI.RELEASE_TIME_FENCE_DAYS,-1))=NVL(MSI.RELEASE_TIME_FENCE_DAYS,-1) AND DECODE(:B13 ,:B2 ,NVL(:B12 ,-1),NVL(MSI.CONTAINER_ITEM_FLAG,-1))=NVL(MSI.CONTAINER_ITEM_FLAG,-1) AND DECODE(:B11 ,:B2 ,NVL(:B10 ,-1),NVL(MSI.CONTAINER_TYPE_CODE,-1))=NVL(MSI.CONTAINER_TYPE_CODE,-1) AND DECODE(:B9 ,:B2 ,NVL(:B8 ,-1),NVL(MSI.INTERNAL_VOLUME,-1))=NVL(MSI.INTERNAL_VOLUME,-1) AND DECODE(:B7 ,:B2 ,NVL(:B6 ,-1),NVL(MSI.MAXIMUM_LOAD_WEIGHT,-1))=NVL(MSI.MAXIMUM_LOAD_WEIGHT,-1) AND DECODE(:B5 ,:B2 ,NVL(:B4 ,-1),NVL(MSI.MINIMUM_FILL_PERCENT,-1))=NVL(MSI.MINIMUM_FILL_PERCENT,-1) AND DECODE(:B3 ,:B2 ,NVL(:B1 ,-1),NVL(MSI.VEHICLE_ITEM_FLAG,-1))=NVL(MSI.VEHICLE_ITEM_FLAG,-1)
We found that most of the time was being spent in the SQL above; filtering the tkprof output (generated from the udump trace) on "total" showed this statement as the biggest consumer, and it is related to the table inv.mtl_system_items_b.
Note: MTL_SYSTEM_ITEMS is a synonym that points to the table inv.mtl_system_items_b. Looking at that table, we found it had also not been analyzed, so it was analyzed using the following command:
ANALYZE TABLE inv.mtl_system_items_b COMPUTE STATISTICS;
The same "Import Items" program was then executed and it took barely 5 minutes to import 1,000 records.
Later we also tested this with 15,000 items; the import took only about 1 hour.
So this customer can now import 45,000 items for one organization in about 3 hours, and all 11 organizations could be finished in around 33 hours even running sequentially (in reality they will run them in parallel and finish within a day).
Conclusion
It may be necessary to run Gather Schema Statistics for all schemas regularly, at least once a week, so that all objects stay analyzed. Keeping these tables analyzed is what makes this kind of performance improvement possible.
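In E-Business Suite the supported way to do this is the Gather Schema Statistics concurrent program; at the plain database level, a weekly gather limited to stale objects might look like the following sketch (the INV schema is only an example):
-- Example of a regular, targeted gather: only objects whose statistics
-- the database considers stale are re-gathered (the schema name is an example)
BEGIN
  DBMS_STATS.GATHER_SCHEMA_STATS(
    ownname          => 'INV',
    options          => 'GATHER STALE',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
    method_opt       => 'FOR ALL COLUMNS SIZE AUTO',
    cascade          => TRUE);
END;
/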
The analysis also included monitoring hardware resources such as CPU, memory and I/O, but none of these were an issue in this scenario: CPU was only 25% used, the SGA was only 50% used, and there was no swapping.
Note: trace was enabled on the concurrent program definition (Concurrent -> Program -> Enable Trace option).