QUESTIONS AND ANSWERS
What are the Focus areas in an AWR report?
Automatic Workload Repository (AWR) is a collection of persistent system performance statistics owned by SYS. It resides in the SYSAUX tablespace. By default, snapshots are generated once every 60 minutes and retained for 7 days.
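If the defaults need adjusting, the snapshot interval and retention can be changed with the DBMS_WORKLOAD_REPOSITORY package (the values below are illustrative, not recommendations):

```sql
-- Take a manual AWR snapshot
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

-- Change the snapshot interval to 30 minutes and the retention to 14 days
-- (both parameters are expressed in minutes)
BEGIN
  DBMS_WORKLOAD_REPOSITORY.MODIFY_SNAPSHOT_SETTINGS(
    interval  => 30,
    retention => 14 * 24 * 60);
END;
/
```

The AWR report itself is generated with the @?/rdbms/admin/awrrpt.sql script.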
How to get the trace information for a custom program?
Go to the System Administrator or Application Developer responsibility -> Concurrent Program Define form, query the concurrent program, and check Enable Trace. (OR)
One can enable trace when submitting the concurrent request: Submit a New Request -> click on Debug Options -> Enable Trace.
After enabling trace, reproduce the issue. The trace file will be generated in the udump location. Collect the trace file and run tkprof on it to understand which query takes time; if it is an .rdf report, run that query and tune it using its execution plan.
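A typical tkprof invocation looks like the following (the trace and output file names are placeholders; the sort options order statements by parse, execute, and fetch elapsed time):

```
tkprof PROD_ora_12345.trc PROD_ora_12345.txt sys=no sort=prsela,exeela,fchela
```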
If the cache size is 10 and out of them 9 have completed and 1 is running long, in the next cycle will it cache 10 more or will it take up 9 more as one is already running?
Cache size works like a buffer: only when the buffer is empty does the manager look for another set of requests from the list. So when one has a cache size of 10, it always picks up 10 requests at a time.
Even if one gives a higher priority to a new request, that request must wait until the buffer is empty and the manager returns to the request list. That request may have to wait a long time if the cache size is set to a high number.
Can one specify the default trace location?
The trace file will be located in the udump directory. Alternatively, find the location with the following query:
select name, value from v$parameter where name = 'user_dump_dest';
See: How To Trace a Concurrent Request And Generate TKPROF File (Document 453527.1)
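Note that udump applies to releases before 11g; on 11g and later, trace files live under the Automatic Diagnostic Repository, whose location can be found with:

```sql
-- Trace file directory under the ADR (11g and later)
select value from v$diag_info where name = 'Diag Trace';
```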
When running 'Gather Schema Statistics', does setting a higher estimate percent give better performance?
Yes. For better results, the estimate percent should be higher, especially for huge schemas such as APPLSYS, which should initially be gathered with 99%.
Why are the requests not getting killed in the backend once cancelled from the front end?
When a request is cancelled from the front end, the FND_CONCURRENT_REQUESTS table may not have been properly updated to Terminated for that request. Hence, it still shows as running at the database and operating system level.
In this case it is recommended to use the following SQL. Connect to SQL*Plus as the APPS user and execute:
SQL> update FND_CONCURRENT_REQUESTS
     set status_code = 'X',
         phase_code = 'C'
     where request_id = &request_id;
SQL> commit;
When prompted, supply the request number of the request having the incorrect status.
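Before forcing the status, it may help to confirm the current phase and status codes stored for the request (&request_id is a SQL*Plus substitution variable that prompts for the request number):

```sql
-- Check the stored phase/status before marking the request Completed/Terminated
select request_id, phase_code, status_code
from   fnd_concurrent_requests
where  request_id = &request_id;
```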
Can you share any note that details what happens in the background when running a concurrent request?
One can enable a trace when submitting the concurrent request: Submit a New Request -> click on Debug Options -> Enable Trace.
One can choose the trace option that suits the troubleshooting. That gives information on what the concurrent program does.
What is the recommended number of rows that should be maintained in the fnd_concurrent_requests table to avoid performance problems?
If the table holds more than 500,000 records, it might create problems.
Purging the table on a regular basis using the Purge Concurrent Request and/or Manager Data concurrent program, together with defragmentation, should give the best performance. Running the purge during non-peak times is recommended.
Does one need to run Concurrent Manager Recovery in OAM after increasing the cache size to 2 * target processes, followed by a bounce?
Only a bounce of the Concurrent Manager is required; there is no need to run Concurrent Manager Recovery.
Is it good to run Gather Schema Statistics on FND tables?
Yes, one can run it for individual modules, including FND.
What causes locks on a request file?
One needs to analyze the locks. The request file may not have completed properly if it is being used by some other process; this can cause the lock.
When setting the Purge program parameters, there is a parameter called 'Purge Other'. What is this referring to?
"Purge Other" Select
"No" Do not delete records from FND_DUAL.
"Yes" Delete records from FND_DUAL.
Please refer to the Oracle Administration System Administration Guide for more guidance.
Why do we need to analyze all index column stats after running Gather Schema Stats? In other words, why doesn't Gather Schema Stats gather column stats?
One should use "Gather Column Statistics" in place of "Analyze All Index Column Statistics". 'Bucket Size' is the number of buckets to be created for the histogram; the default is 254, which is a good value to start with. Refer to the seeded histograms in FND_HISTOGRAM_COLS with a bucket size of 254.
In fact, running 'Gather Schema Statistics' (for ALL schemas) at regular intervals is sufficient: after gathering the schema-level statistics, the program creates histograms for the columns listed in the FND_HISTOGRAM_COLS table.
Hence one does not need to run 'Gather Column Statistics' regularly.
SQL> select table_name, column_name, hsize
     from fnd_histogram_cols
     where table_name like 'FND_CONC%';

TABLE_NAME                 COLUMN_NAME              HSIZE
-------------------------- ------------------------ -----
FND_CONCURRENT_PROCESSES   PROCESS_STATUS_CODE        254
FND_CONCURRENT_REQUESTS    PHASE_CODE                 254
FND_CONCURRENT_REQUESTS    STATUS_CODE                254

(HSIZE is the bucket size.)
Note:
1. Running "Gather Column Statistics" if really required. No need not run regularly.
2. In general, its recommended to run 'Gather Schema Statistics' at regular intervals which takes care of histograms.
1. Running "Gather Column Statistics" if really required. No need not run regularly.
2. In general, its recommended to run 'Gather Schema Statistics' at regular intervals which takes care of histograms.
We recently upgraded the database from 11.2.0.1 to 11.2.0.2 and observed that jobs submitted through CONCSUB stay in the Pending/Standby status for a very long time.
It means that no manager is assigned to the request. It could also mean that there is a conflict and the Conflict Resolution Manager (CRM) is not releasing it. Check the incompatibilities and specialization rules to see whether there are conflicts.
How often is it recommended to run the Analyze All Index column Statistics?
One should use "Gather Column Statistics" in place of "Analyze All Index Column Statistics", because "Analyze All Index Column Statistics" has been obsoleted. It can be run every week.
Is it recommended to schedule a job that gets statistics a 100% of estimate percent on the tables listed with the package FND_STATS?
Yes. The highest estimate percent gives the highest accuracy.
If we want to run ANALYZE on all tables in order to get chained-row statistics, will it cause any problems if we run Gather Schema Statistics afterwards?
Yes. It will cause a performance issue.
What are the other options for the entity parameter of the FNDCPPUR job (the presentation just showed ALL)?
One can use the following note which has all the details: Concurrent Processing Tables and Purge Concurrent Request and/or Manager Data Program (FNDCPPUR) (Document 104282.1)
Can we run statistics on these FND tables from the back end?
Yes, one can gather statistics from the back end using the API fnd_stats.gather_table_stats.
Ex: exec fnd_stats.gather_table_stats('APPLSYS', 'FND_CONCURRENT_REQUESTS', percent => 99);
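The same package can also gather statistics for an entire schema; a minimal sketch, assuming the second positional argument is the estimate percent, as in the table-level call above:

```sql
-- Gather statistics for the whole APPLSYS schema at a 99% estimate
exec fnd_stats.gather_schema_stats('APPLSYS', 99);
```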
Where can I set the work shift?
Using the System Administrator responsibility, navigate: Concurrent Manager Define -> Work Shifts.
What happens if a job runs and doesn't finish during the work shift? Will the manager not finish the job until the next work shift?
The work shift only controls when requests are picked up. If the request is already in Running status, it is not an issue: the request keeps running even after the work shift time limit ends.
Is performance affected when we have all the managers active 24 hrs every day?
No. As long as the server has enough resources to handle the processes run by the managers, it is not an issue. Each request is processed by its own manager based on specialization, which helps process requests from the queue faster.
By giving only the time for which we want our specialized manager to work in the Description field, will it work?
No, it will not. One has to create a work shift and assign that work shift to the manager.
When two requests are being processed, we see that one output gets created with a 0 file size and the second one contains data/content from the XML files of both requests.
There is a known issue where one request gets the output of another request. It is suggested to open an SR or post in the communities for further help.
The scalable flag helps large reports complete by creating temp files, but could it affect small reports by creating these temp files too?
One can set this at the template or data-definition level so that it does not affect all other reports.
What is RAC?
RAC = Real Application Cluster for the Database Tier. Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications. Oracle RAC is a key component of Oracle's private cloud architecture. Oracle RAC support is included in the Oracle Database Standard Edition for higher levels of system uptime.
Can we use this feature to set the job to run on a specific instance in R12.1, even if we run the SCAN feature of 11.2 DB version?
Yes, it is possible. Refer to the following note: Method Of Always Running A Specific Concurrent Program On One Particular Node (Document 1070621.1)
What is that target instance used for?
It is used when one needs a request to run on a specific instance in a RAC environment.
Is the Target RAC not available in 11.5.10.2?
Correct; it is not available until 12.1.
Where can I get the usage for all profile options?
Please refer to the System Administrator's Guide, which has the profile option information.
What is the performance difference between the PIPE and QUEUE types? PIPE has faster performance, but does it give double or triple the performance of QUEUE?
OS pipes are always faster than advanced queues. Pipes are more efficient than queues, but they require a Transaction Manager to be running on each database instance (RAC node); with advanced queues there is no need to define managers on every node. For the best performance use pipes; for easier maintenance, one might prefer 'Queue'.