Monday, August 24, 2015

E-Business Suite Clone Log Parser Utility (Rapid Clone 11i, 12.0, 12.1)




Download the Latest Version of the Clone Log Parser (v2.3.1)

Click Here To Download

In this Document

General Information
Benefits
Installation
Basic Usage
Command Line Options
Log Parser Output
Troubleshooting and Reporting Problems
Related Resources
ERs and Bugs

Communicate with the creator of the Clone Log Parser, post feedback, troubleshoot problems and much more in Oracle Communities:

    Click Here To Access the Log Parser Thread

General Information 

There are 2 videos for the Clone Log Parser:
  • Video (1 of 2): Download / Install / Run / Locate Report (8 min)
  • Video (2 of 2): Report Features (7 min)
The Clone Log Parser Utility is provided by the E-Business Suite Proactive Services Team. The purpose of the utility is to consolidate error information from clone log files in various locations into a single HTML report. A single cloning session may generate up to a dozen separate log files; the Log Parser lets you view relevant snippets of each log at one time. Additionally, the Log Parser performs some basic configuration and health checks of the environment and may provide leads for solving problems.
Log Parser works on 11i (11.5.9+), R12.0.x and R12.1.x (12.2.x is not currently supported). Log Parser will report on all cloning, oraInventory and relinking logs in the following list. (To add additional logs, please leave a comment on this document or in the Oracle Communities thread):
  • StageDBTier.log
  • StageAppsTier_.log
  • ApplyDBTier.log
  • ApplyDatabase.log
  • ApplyDBTechStack.log
  • ApplyAppsTechStack.log
  • ApplyAppsTier.log
  • CloneContext.log
  • make_.log, make.log
  • adconfig.log
  • ohclone.log
  • adcrdb_.txt
  • NetServiceHandler.log
  • setup_stubs.log
  • Central/Global oraInventory/logs directory and all sub-directories,
    for any files with a .err, .log or .txt extension

Benefits


  • Zero configuration! Just unzip, run, view report
  • Save time validating clone logs - Consolidates over 12 cloning log file types in a single HTML report!
  • Fix errors - Parses log file; exposes log snippets showing errors & warnings
  • Links solutions to key error messages in My Oracle Support content, for instant error-fix recommendations

NOTE: The Log Parser is not currently supported on Windows platforms.

Installation

The utility is installed simply by unzipping the LogParser.zip file into the appropriate directory. A separate Log Parser installation is required for the dbTier and the appsTier, and the reports are likewise separate for each tier type. The Log Parser can be downloaded, installed, run, and a report generated in under 10 minutes total.
11i / R12 - Database Tier Install (any/all RDBMS versions)

Copy and unzip LogParser.zip into $ORACLE_HOME/appsutil/clone, thereby creating $ORACLE_HOME/appsutil/clone/LogParser/.
Example:
$ cp LogParser.zip $ORACLE_HOME/appsutil/clone
$ cd $ORACLE_HOME/appsutil/clone
$ unzip LogParser.zip

11i / R12 - Application Tier Install

Copy and unzip LogParser.zip into <COMMON_TOP>/clone, thereby creating <COMMON_TOP>/clone/LogParser/.

Example:
$ cp LogParser.zip <COMMON_TOP>/clone/
$ cd <COMMON_TOP>/clone/
$ unzip LogParser.zip

NOTE: To search and find logs properly, the Log Parser MUST BE installed into the proper directory noted above for the corresponding tier type. 

Optionally, the Log Parser may be used to parse log files which do not currently reside on an EBS file system (such as logs provided to Oracle Support or otherwise transferred from the original system). To accomplish this, the Log Parser may be unzipped into virtually any directory on a UNIX/Linux system. When used in this manner, the command line argument searchDir= must be used to tell the Log Parser where to look for the clone logs and to bypass the usual search patterns. Log Parser uses $PWD to determine the tier type and the EBS release it is being run on, and therefore which directories to search for logs; the searchDir= option overrides this default search behavior.

PERL Environment: The applications Perl environment is not required, and $PERL5LIB should not be set. Perl 5.8 or higher is required.
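A quick way to confirm the Perl prerequisites before running the utility (a minimal sketch; adjust for your shell):

$ unset PERL5LIB                # Log Parser requires $PERL5LIB to be unset
$ which perl                    # confirm which Perl will be picked up
$ perl -e 'print "$]\n"'        # prints the numeric version, e.g. 5.008008; must be 5.8+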

Basic Usage

By default (without using any command line arguments) the Log Parser will look for the latest log of each log type. If there are 2 (or more) log files of the same type, for example ApplyDBTier_07020904.log and ApplyDBTier_07031024.log, the Log Parser will determine which file was created most recently and disregard the older file.
 
The basic usage is as follows:
$ cd LogParser
$ perl LogParser.pl 

Command Line Options

Most users will not need any command line arguments. In most cases it is expected that the Log Parser will be run within a few days of encountering a cloning issue, and in that case no arguments should be required to see what happened during the clone. It is recommended that for the first few runs of LogParser.pl you simply run it with no command line arguments and review the output. If the Log Parser is not outputting the information you expect in the HTML report, or if it has trouble automatically picking up certain directories (i.e., INST_TOP on R12), then the options below may help yield better results.

searchDir=
  • Specifies a single directory to search (along with all sub-directories) for Clone related logs. (No other directories are searched) 
  • Logs to be analyzed would need to be manually placed into the directory being specified by the searchDir= argument. 
  • Will report on all logs in the given directory - does not filter down to the latest log file. (The default behavior outside this option is to filter logs to the latest one of each type.)
  • Not valid with any other arguments.
  • When using searchDir=, LogParser can be installed or unzipped in nearly any directory - it does not self-detect the install directory when this option is used.
  • Example:
    $ perl LogParser.pl searchDir=/tmp/logs
history=
  • Valid parameter is a number 1-90 (days).
  • Overrides the default search behavior for Log Parser. Log Parser, by default, searches for the most recent log of each type. (There are nearly a dozen different types of logs created by a full clone.) By default, then, it does not matter how old a log file is: the most recent log is always the only one shown in the HTML report. With the history= option, Log Parser shows any files newer than the given number of days.
  • Example:
    $ perl LogParser.pl history=30
instTop=
  • Only valid on an R12 Application Tier
  • Specifies the INST_TOP, typically only needed when Log Parser cannot determine the value for INST_TOP 
  • Example:
    $ cd <COMMON_TOP>/clone/LogParser
    $ perl LogParser.pl instTop=/u02/oracle/R121TEST/inst/apps/R121TEST_atg0014-sun
adconfigOnly=
  • Parses only the latest Autoconfig log found (adconfig.log) 
  • Valid on 11i or R12 appsTier
    • For 11i: Requires APPL_TOP environment variable set properly (script will prompt for a value if APPL_TOP is not set) 
    • For R12: Requires INST_TOP environment variable set properly (script will prompt for a value if INST_TOP is not set) 
  • Example:
    $ cd <COMMON_TOP>/clone/LogParser
    $ perl LogParser.pl adconfigOnly=r12

NOTE: The following 3 options are for 11i Application Tier only and must be used together without any other command line options. 

applTop=
  • Only valid on an 11i Application Tier
  • Specifies an 11i $APPL_TOP directory, typically only required in cases where Log Parser cannot determine the $APPL_TOP directory on a previous run
  • Must be used in conjunction with oh806= & ias= (order is not important)
  • Example:
    $ perl LogParser.pl oh806=/oracle/115102/TESTora/8.0.6 ias=/oracle/115102/TESTora/iAS applTop=/oracle/115102/TESTappl
oh806=
  • Only valid on an 11i Application Tier
  • Specifies the 8.0.6 $ORACLE_HOME directory, typically only required in cases where Log Parser cannot determine the $APPL_TOP/admin directory on a previous run
  • Must be used in conjunction with applTop= & ias= (order is not important)
  • Example:
    $ perl LogParser.pl oh806=/oracle/115102/TESTora/8.0.6 ias=/oracle/115102/TESTora/iAS applTop=/oracle/115102/TESTappl
ias=
  • Only valid on an 11i Application Tier
  • Specifies the $iAS base directory, typically only required in cases where Log Parser cannot determine the $IAS directory on a previous run
  • Must be used in conjunction with oh806= & applTop= (order is not important)
  • Example:
    $ cd <COMMON_TOP>/clone/LogParser
    $ perl LogParser.pl ias=/oracle/115102/TESTora/iAS oh806=/oracle/115102/TESTora/8.0.6 applTop=/oracle/115102/TESTappl


Log Parser Output

HTML Report File
The Log Parser creates a single HTML output file each time it is run. The HTML report file is located under the "LogParser" base directory, in the "reports" sub-directory, and is named "CloneReport.html" (or an Adconfig-specific report name when run with adconfigOnly=).
View the sample report: Apps Tier DB Tier
View the sample terminal output: Terminal Output Example  (can be viewed in a browser)

Reading the HTML Report
The basic function of the Log Parser is to look for specific error strings in any log file created by cloning. When a matching error is found, that error is written in colored text (orange, red or purple) to the HTML report along with the 5 lines before and after the error. When an error line is found, a hyperlink is provided on the left side as the "Line #:" text. Hovering over the link or clicking it provides some general information about the error message and, in some cases, a My Oracle Support Doc ID.
The log files reported on in the HTML report are listed (top to bottom) from the most recently updated file to the earliest updated. Generally, when a clone fails, the last log updated will have the best clues or errors about why the clone failed.
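To spot-check a single log by hand in a similar fashion, a grep with context lines yields comparable snippets. The error patterns below are illustrative only, not the parser's actual match list:

$ grep -n -B5 -A5 -E 'ORA-[0-9]+|RC-[0-9]+|ERROR' ApplyDBTier.log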

Troubleshooting and Reporting Problems

Communities Clone Log Parser Thread - Communicate with the creator of the Clone Log Parser, post feedback, troubleshoot problems and much more in Oracle Communities.
The Log Parser creates only a single HTML report file and does not create its own log. Any available information about what is being done is either put into the HTML report or output to the terminal screen (STDOUT). If required, all the information output to the terminal STDOUT can be "tee'd" into a log file with the following command:
$ perl LogParser.pl 2>&1 | tee myParserLog.log 
 A few of the common failures or problems which may be encountered are:
1) Complex or non-standard directory structure.
  • In the event that a directory structure is in use, especially on the Apps Tier, that Log Parser cannot handle, use the command line arguments to provide the required directories. If nothing helps, please leave a comment on this document including details about the file system.
2) Obscure PERL version being used. Perl 5.8.0 or higher is required.
  • Most EBS systems have 2 or 3 different Perl installations to choose from. If a Perl-type error message is encountered, please try a different version of Perl and unset the $PERL5LIB variable if it is set.
  • Make sure that Log Parser was installed properly into either $ORACLE_HOME/appsutil/clone/LogParser/LogParser.pl (dbTier) or <COMMON_TOP>/clone/LogParser/LogParser.pl (appsTier).
3) Unexpected results in the HTML file or no results in the HTML file.
  • If a specific log file is not being shown in the HTML report, check the log for its last update date, especially if another log file of the same type is being shown (see the listing example after this list).
  • Use the history= parameter to gather all logs (multiple logs for each type, vs. the default behavior of 1 log of each type). If this works, then either the Log Parser is determining the dates incorrectly or the user's perception of which log should be shown is incorrect. Please leave a comment on this document for any persistent issues.
4) The $PERL5LIB environment variable should not be set when using the Log Parser.
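For item 3 above, the quickest manual check of which log of a given type is newest (the one the parser selects by default) is to list the candidates by modification time. The path below assumes a standard dbTier log location and is illustrative only:

$ ls -lt $ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTier*.log | head -5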
Additional Assistance: Please do not log an SR for problems with Log Parser. You may either leave a comment on this document or preferably, post in the Log Parser Community Thread.

Related Resources

Oracle Communities Log Parser Thread 

   Click Here To Access the Log Parser Thread

Other Diagnostic Scripts Created by the Proactive Services Team
  • Document 1411723.1 Concurrent Processing - CP Analyzer for E-Business Suite
  • Document 1369938.1 Workflow Analyzer script for E-Business Suite Workflow Monitoring and Maintenance

Enhancements and Bugs

Date          Description                                                       Status
-----------   ---------------------------------------------------------------   ------
14-MAR-2014   Revisit/revamp PERL5LIB and environment setup in LogParser.pl     WIP
14-MAR-2014   If a Sanity check fails, do not "die"; simply log a message
              that the check failed (for example, if CONTEXT_FILE cannot be
              opened in check_s_db_file_utl_dir).                               WIP

Get the Log Parser

Click Here To Download

Friday, August 14, 2015

R12.2 How To Re-attach Oracle Homes To The Central Inventory

In this Document
Goal
Solution


APPLIES TO:

Oracle Applications Manager - Version 12.2.2 and later
Information in this document applies to any platform.

GOAL

This document explains how to re-attach 12.2 EBS Oracle Homes to the central inventory.
There could be a number of reasons why you might wish to re-attach the Oracle Homes; for example, you may have experienced one of the following 2 errors during upgrade or Autoconfig runs:
1/ INST-07536 and INST-07531: Middleware home location specified does not have the required Oracle homes installed

2/ oraInventory/ContentsXML/inventory.xml has no homes listed

SOLUTION

Before looking at a solution, let us observe a working inventory.xml file. This file is located in <oraInventory>/ContentsXML/inventory.xml. A healthy file has the following general structure (the HOME entries below are placeholders; actual names and locations vary per system):

<?xml version="1.0" standalone="yes" ?>
<INVENTORY>
<VERSION_INFO>
   <SAVED_WITH>10.1.0.6.0</SAVED_WITH>
   <MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
   <HOME NAME="..." LOC="..." TYPE="O" IDX="1"/>
   <HOME NAME="..." LOC="..." TYPE="O" IDX="2"/>
   <!-- further HOME entries -->
</HOME_LIST>
</INVENTORY>

Note, we have 4 application tier homes registered per file system (the 3 FMW homes plus the tools 10.1.2 home). Here is an example of the groups of HOME entries you should see:

fs1
   4 HOME entries whose LOC paths fall under the fs1 base

fs2
   4 HOME entries whose LOC paths fall under the fs2 base

Database
   1 HOME entry for the RDBMS Oracle Home
You can re-register any one of the 3 FMW application tier homes as follows. Important: you must run these for both file systems (fs1 and fs2).
<fs1 base>/FMW_Home/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="<Oracle Home path on fs1>" ORACLE_HOME_NAME="<unique Oracle Home Name>" CLUSTER_NODES="{}"
<fs2 base>/FMW_Home/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="<Oracle Home path on fs2>" ORACLE_HOME_NAME="<unique Oracle Home Name>" CLUSTER_NODES="{}"

e.g.
/u01/test/fs1/FMW_Home/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u01/test/fs1/FMW_Home/oracle_common" ORACLE_HOME_NAME="VIS_TOOLS__u03_VIS_fs1_EBSapps_10_1_2" CLUSTER_NODES="{}"
/u01/test/fs2/FMW_Home/oracle_common/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u01/test/fs2/FMW_Home/oracle_common" ORACLE_HOME_NAME="VIS_TOOLS__u03_VIS_fs2_EBSapps_10_1_2" CLUSTER_NODES="{}"

If you need to register the tools (10.1.2) home you can use these commands, again you will need to run this for both fs1 and fs2:
<fs1 base>/EBSapps/10.1.2/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="<10.1.2 Oracle Home path on fs1>" ORACLE_HOME_NAME="<unique Oracle Home Name>"
<fs2 base>/EBSapps/10.1.2/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="<10.1.2 Oracle Home path on fs2>" ORACLE_HOME_NAME="<unique Oracle Home Name>"

e.g.
/u01/test/fs1/EBSapps/10.1.2/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u01/VIS/fs1/EBSapps/10.1.2" ORACLE_HOME_NAME="VIS_TOOLS__u03_VIS_fs1_EBSapps_10_1_2"
/u01/test/fs2/EBSapps/10.1.2/oui/bin/runInstaller -silent -attachHome -invPtrLoc /etc/oraInst.loc ORACLE_HOME="/u01/VIS/fs2/EBSapps/10.1.2" ORACLE_HOME_NAME="VIS_TOOLS__u03_VIS_fs2_EBSapps_10_1_2"

Make sure each ORACLE_HOME_NAME is unique. Once you have completed the steps for all missing Oracle Homes, confirm that the inventory.xml file now contains all the missing homes. Log files for the attachment process can be found under oraInventory/logs.
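As a quick illustrative check that the homes are now registered (the inventory path comes from your /etc/oraInst.loc):

$ grep -i 'HOME NAME' <oraInventory>/ContentsXML/inventory.xml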

ERROR: RC-50014: Fatal: Execution of AutoConfig was failed OEL 6.7

WARNING: [AutoConfig Error Report]
The following report lists errors AutoConfig encountered during each
phase of its execution. Errors are grouped by directory and phase.
The report format is:
 

  [SETUP PHASE]
  AutoConfig could not successfully execute the following scripts:
    Directory: /oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/install
      adgendbc.sh             INSTE8_SETUP       127
      adgenjky.sh             INSTE8_SETUP       127
      afcpnode.sh             INSTE8_SETUP       127
      afgcsreg.sh             INSTE8_SETUP       127

  [PROFILE PHASE]
  AutoConfig could not successfully execute the following scripts:
    Directory: /oracle/u02/d02/fs1/FMW_Home/webtier/perl/bin/perl -I /oracle/u02/d02/fs1/FMW_Home/webtier/perl/lib/5.10.0 -I /oracle/u02/d02/fs1/FMW_Home/webtier/perl/lib/site_perl/5.10.0 -I /oracle/u02/d02/fs1/EBSapps/appl/au/12.0.0/perl -I /oracle/u02/d02/fs1/FMW_Home/webtier/ohs/mod_perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/scripts/adexecsql.pl sqlfile=/oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/install
      afwebprf.sql            INSTE8_PRF         127
      amscmprf.sql            INSTE8_PRF         127
      amswebprf.sql           INSTE8_PRF         127
      clnadmprf.sql           INSTE8_PRF         127
      cncmprf.sql             INSTE8_PRF         127
      cseadmprf.sql           INSTE8_PRF         127
      csfadmprf.sql           INSTE8_PRF         127
      csiadmprf.sql           INSTE8_PRF         127
      eamadmprf.sql           INSTE8_PRF         127
      fteadmprf.sql           INSTE8_PRF         127
      oksfrmprf.sql           INSTE8_PRF         127
      txkappsprf.sql          INSTE8_PRF         127
      wshadmprf.sql           INSTE8_PRF         127
    Directory: /oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/install
      adadmprf.sh             INSTE8_PRF         127
      afadmprf.sh             INSTE8_PRF         127
      afcpctx.sh              INSTE8_PRF         127
      afcpgsm.sh              INSTE8_PRF         127
      ibywebprf.sh            INSTE8_PRF         127
      igccmprf.sh             INSTE8_PRF         127
      jtfictx.sh              INSTE8_PRF         127
      okladmprf.sh            INSTE8_PRF         127
      txkJavaMailerCfg.sh     INSTE8_PRF         127
      txkWebServicescfg.sh    INSTE8_PRF         127

  [APPLY PHASE]
  AutoConfig could not successfully execute the following scripts:
    Directory: /oracle/u02/d02/fs1/FMW_Home/webtier/perl/bin/perl -I /oracle/u02/d02/fs1/FMW_Home/webtier/perl/lib/5.10.0 -I /oracle/u02/d02/fs1/FMW_Home/webtier/perl/lib/site_perl/5.10.0 -I /oracle/u02/d02/fs1/EBSapps/appl/au/12.0.0/perl -I /oracle/u02/d02/fs1/FMW_Home/webtier/ohs/mod_perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/install
      adadmat.pl              INSTE8_APPLY       127
    Directory: /oracle/u02/d02/fs1/FMW_Home/webtier/perl/bin/perl -I /oracle/u02/d02/fs1/FMW_Home/webtier/perl/lib/5.10.0 -I /oracle/u02/d02/fs1/FMW_Home/webtier/perl/lib/site_perl/5.10.0 -I /oracle/u02/d02/fs1/EBSapps/appl/au/12.0.0/perl -I /oracle/u02/d02/fs1/FMW_Home/webtier/ohs/mod_perl/lib/site_perl/5.10.0/x86_64-linux-thread-multi /oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/scripts/adexecsql.pl sqlfile=/oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/install
      txkGenADOPValidNodes.sql INSTE8_APPLY      127



AutoConfig is exiting with status 29

ERROR: RC-50014: Fatal: Execution of AutoConfig was failed
Raised by oracle.apps.ad.clone.ApplyApplTop

END: Executed runAutoConfig...

START: Executing /oracle/u02/d02/fs1/inst/apps/UAT_gtest/admin/install/txkWfClone.sh -nopromptmsg
  1. txkWfClone.sh exited with status 127
ERROR: txkWfClone.sh execution failed, exit code 127


Solution:

yum install glibc-devel.i686
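The INSTE8 exit code 127 in the report above generally means "command not found" or a missing shared library; on 64-bit OEL 6.7 the 32-bit glibc development package is the usual missing piece. A quick check before and after applying the solution:

$ rpm -q glibc-devel.i686       # reports "package ... is not installed" if missing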

Saturday, August 8, 2015

Concurrent Processing - Performance


QUESTIONS AND ANSWERS

What are the Focus areas in an AWR report?

    The Automatic Workload Repository (AWR) is a collection of persistent system performance statistics owned by SYS. It resides in the SYSAUX tablespace. By default, snapshots are generated once every 60 minutes and maintained for 7 days.
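To see the snapshot interval and retention currently in effect, the standard DBA_HIST control view can be queried:

SQL> select snap_interval, retention from dba_hist_wr_control;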

How to get the trace information for a custom program?

  Go to the System Administrator/Application Developer responsibility -> Concurrent Program -> Define form, query the concurrent program, and check Enable Trace. (OR)
   One can enable trace when submitting the concurrent request: Submit a New Request -> click Debug Options -> Enable Trace.

   After enabling trace, reproduce the issue. The trace file will be written to the udump location. Collect the trace file and run tkprof on it to understand which query takes the time; if it is an RDF report, run the query and tune it using the execution plan.
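For example, a typical tkprof invocation on the raw trace (file names here are hypothetical; the sort options order statements by parse/execute/fetch elapsed time):

$ tkprof TEST_ora_12345.trc TEST_ora_12345.txt sys=no sort=prsela,exeela,fchela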

If the cache size is 10 and out of them 9 have completed and 1 is running long, in the next cycle will it cache 10 more or will it take up 9 more as one is already running?

      Cache size is like a buffer. Only when the buffer is empty does the manager look at the request list for another set, so with a cache size of 10 it always picks up 10 requests at a time.
   Even if one gives a higher priority to a new request, that new request must wait until the buffer is empty and the manager returns to look at the request list. That request may have to wait a long time if the buffer size is set to a high number.

Can one specify the default trace location?

    The trace file will be located in the udump directory, or make use of the following query:
select name, value from v$parameter where name = 'user_dump_dest';

See: How To Trace a Concurrent Request And Generate TKPROF File (Document 453527.1)

When running 'Gather Schema Statistics', will setting a higher estimate percent give better performance?

   Yes. For better performance, the estimate percent must be higher, especially for huge schemas like APPLSYS; initially it should be run with 99%.
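For reference, the same gather can also be launched from the back end via the FND_STATS API (a sketch; the second argument is the estimate percent, 99 as discussed above):

SQL> exec fnd_stats.gather_schema_statistics('APPLSYS', 99);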

Why are requests not killed at the back end once cancelled from the front end?

    When cancelling requests from the front end, the fnd_concurrent_requests table may not have been updated to Terminated for that request; hence it still shows as running at the database and operating system level.
      In this case it is recommended to use the following SQL.
             Connect to SQL*Plus as the APPS user and execute:
SQL> update FND_CONCURRENT_REQUESTS
     set status_code='X',
     phase_code='C'
     where request_id=<request_id>;
SQL> commit;

Replace <request_id> with the request number having the incorrect status.

Please share any note that gives details of what happens in the background when running a concurrent request.

   One can enable a trace when submitting the concurrent request: Submit a New Request -> click Debug Options -> Enable Trace.
   Choose the trace option which suits the troubleshooting; that gives information on what the concurrent program does.

What is the recommended number of rows that should be maintained in the fnd_concurrent_requests table to avoid performance problems?

If there are more than 500,000 records, it might create problems.
Purging the table on a regular basis using the 'Purge Concurrent Request and/or Manager Data' concurrent request, plus defragmentation, should give the best performance. Running the request during non-peak times is recommended.
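A quick way to see how close the table is to that guideline:

SQL> select count(*) from fnd_concurrent_requests;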

Does one need to run Concurrent Manager Recovery in OAM after increasing the cache size to 2 * target processes, followed by a bounce?

Only a bounce of the Concurrent Manager is required; there is no need to run Concurrent Manager Recovery.

Is It GOOD to run Gather Schema Statistics on FND tables? 

Yes, one can run it for individual modules, including FND.

What is the cause of getting locks on a request file?

One needs to analyze the locks. The request file may not have been completed properly if it is being used by some other process; this can cause the lock.

When setting the Purge program parameters, there is a parameter called 'Purge Other'.  What is this referring to?

"Purge Other" Select 
"No" Do not delete records from FND_DUAL.
"Yes" Delete records from FND_DUAL.
Please refer to the Oracle Administration System Administration Guide for more guidance.

Why do we need to analyze all index column stats after running Gather Schema Stats? In other words, why doesn't Gather Schema Stats gather column stats?

   One should use "Gather Column Statistics" in place of "Analyze All Index Column Statistics".
   'Bucket size' - Number of buckets to be created for the Histogram.  The default is 254.  It's a good value to start with that.  Please refer to the seeded histograms in FND_HISTOGRAM_COLS with 'Bucket Size' 254.
   In fact, run the 'Gather Schema Statistics' (for ALL schema) at regular intervals.  After gathering the schema level statistics, this program creates the histogram for the specified columns in the FND_HISTOGRAM_COLS tables.
Hence one does not need to run 'Column Statistics' regularly.
    
SQL> select table_name, column_name, hsize
     from fnd_histogram_cols
     where table_name like 'FND_CONC%';

TABLE_NAME                     COLUMN_NAME                    HSIZE
------------------------------ ------------------------------ ----------
FND_CONCURRENT_PROCESSES       PROCESS_STATUS_CODE            254
FND_CONCURRENT_REQUESTS        PHASE_CODE                     254
FND_CONCURRENT_REQUESTS        STATUS_CODE                    254
Note:
1. Run "Gather Column Statistics" only if really required; it need not be run regularly.
2. In general, it is recommended to run 'Gather Schema Statistics' at regular intervals, which takes care of the histograms.

We recently upgraded the database from 11.2.0.1 to 11.2.0.2 and observed that jobs submitted through CONCSUB stay in PENDING STANDBY status for a very long time.

It means that no manager is assigned to the request. It could also mean that there is a conflict and the Conflict Resolution Manager (CRM) is not releasing it. Check the incompatibilities and specialization rules to see if there are conflicts.
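To list the stuck requests from the back end (Pending/Standby corresponds to phase_code 'P' and status_code 'Q'):

SQL> select request_id, concurrent_program_id, requested_start_date
     from fnd_concurrent_requests
     where phase_code = 'P' and status_code = 'Q';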

How often is it recommended to run the Analyze All Index column Statistics?

   One should use "Gather Column Statistics" in place of "Analyze All Index Column Statistics" because Analyze All index column statistics obsoleted.  It can run every week.

Is it recommended to schedule a job that gathers statistics at 100% estimate percent on the tables listed in the FND_STATS package?

Yes. The highest estimate percent gives the highest accuracy.

If we want to run Analyze on all the tables in order to get chained-row statistics, will it cause any problems if we run Gather Schema Statistics afterwards?

   Yes, it can cause a performance issue.

What are the other options for the entity parameter of the FNDCPPUR job (the presentation just showed ALL)?

   One can use the following note, which has all the details: Concurrent Processing Tables and Purge Concurrent Request and/or Manager Data Program (FNDCPPUR) (Document 104282.1)

Can we run statistics on these FND tables from the back end?

   Yes, one can run statistics from the back end using the API fnd_stats.gather_table_stats.
   Ex: exec fnd_stats.gather_table_stats('APPLSYS','FND_CONCURRENT_REQUESTS',PERCENT=>99);

Where can I set the work shift?

   Using the System Administrator responsibility, navigate: Concurrent -> Manager -> Workshifts.

What happens if a job runs and does not finish during the workshift? Will the manager not finish the job until the next workshift?

   The workshift only governs picking up the request. If the request is already in Running status, it is not an issue: the request keeps running even after the workshift time limit ends.

Is performance affected when we have all the managers active 24 hours every day?

   No. When the server has enough resources to handle the processes run by the managers, it is not an issue. Every request is processed by its own manager based on the specialization rules, which helps process requests from the queue faster.

Will it work to give, in the Description, only the time during which we want our specialized manager to work?

   No, it will not work. One has to create a workshift and assign that workshift to the manager.

When 2 requests are being processed, we see that one output gets created with 0 file size and the 2nd one contains data/content from the XML files of both requests.

   There is a known issue where one request gets the output of another request. It is suggested to open an SR or post in the communities for further help.

The scalable flag helps large reports by creating temp files, but could it affect small reports by creating these tmp files too?

   One can set this at the template or data-definition level so that it does not affect all other reports.

What is RAC?

   RAC = Real Application Cluster for the Database Tier.  Oracle RAC is a cluster database with a shared cache architecture that overcomes the limitations of traditional shared-nothing and shared-disk approaches to provide highly scalable and available database solutions for all business applications.  Oracle RAC is a key component of Oracle's private cloud architecture. Oracle RAC support is included in the Oracle Database Standard Edition for higher levels of system uptime.

Can we use this feature to set the job to run on a specific instance in R12.1, even if we run the SCAN feature of 11.2 DB version?

   Yes, it is possible. Refer to the following note: Method Of Always Running A Specific Concurrent Program On One Particular Node (Document 1070621.1)

What is the target instance used for?

   It is used when one needs a request to run on a specific instance in a RAC environment.

Is the Target RAC not available in 11.5.10.2?

   No, this is not available until 12.1.

Where can I get the usage for all profile options?

   Please refer to the System Administrator's Guide, which has profile option information.

What is the performance difference between the PIPE and QUEUE types? PIPE is faster, but does it give double or triple the performance of QUEUE?

   OS pipes are always faster than Advanced Queuing. Pipes are more efficient than queues, but they require a Transaction Manager to be running on each database instance (RAC); if pipes are used, a few managers need to be defined on every node, and Advanced Queuing is not required. For good performance and less maintenance use pipes; however, one might want to use 'Queue' for easier maintenance.

Friday, August 7, 2015

[UNEXPECTED]Error occurred while performing database validations adop R12.2 phase=prepare

Step 1 -
[appl8020@ghorizon d02]$ . EBSapps.env

  E-Business Suite Environment Information
  ----------------------------------------
  RUN File System           : /app/8020/d02/fs1/EBSapps/appl
  PATCH File System         : /app/8020/d02/fs2/EBSapps/appl
  Non-Editioned File System : /app/8020/d02/fs_ne


  DB Host: ghorizon.galanaoil.com  Service/SID: CRP


  E-Business Suite Environment Setting
  ------------------------------------
  - Enter [R/r] for sourcing Run File System Environment file, or
  - Enter [P/p] for sourcing Patch File System Environment file, or
  - Enter anything else to exit

  Please choose the environment file you wish to source [R/P]:r

  Sourcing the RUN File System ...

[appl8020@ghorizon d02]$ echo $FILE_EDITION
run
[appl8020@ghorizon d02]$ pwd
/app/8020/d02
[appl8020@ghorizon d02]$ adop phase=prepare

Enter the APPS password:
Enter the SYSTEM password:
Enter the WLSADMIN password:

Validating credentials...

Initializing...
    Run Edition context  : /app/8020/d02/fs1/inst/apps/CRP_ghorizon/appl/admin/CRP_ghorizon.xml
    Patch edition context: /app/8020/d02/fs2/inst/apps/CRP_ghorizon/appl/admin/CRP_ghorizon.xml
    Patch file system freespace: 30.89 GB

Validating system setup...
    Node registry is valid.
    [ERROR]     Failed to execute SQL statement :
   declare
    l_msg varchar2(4000);
  begin
    ad_zd_adop.adop_database_validations(l_msg);
    dbms_output.put_line(l_msg);
  end;

    [ERROR]     Error Message :
    [UNEXPECTED]Error occurred while performing database validations


[STATEMENT] Please run adopscanlog utility, using the command

"adopscanlog -latest=yes"

to get the list of the log files along with snippet of the error message corresponding to each log file.
adop exiting with status = 1 (Fail)
[appl8020@ghorizon d02]$
Step 2 - Run "adopscanlog -latest=yes", as suggested in the adop output above, to identify the log file containing the error.
Solution:
ALTER SYSTEM SET "_system_trig_enabled" = TRUE SCOPE=BOTH;
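adop's database validations run the PL/SQL shown above (ad_zd_adop.adop_database_validations), which depends on system triggers being enabled; the ALTER SYSTEM re-enables them. To verify the hidden parameter afterwards (run as SYS; x$ fixed tables require SYSDBA):

SQL> select x.ksppinm name, y.ksppstvl value
     from x$ksppi x, x$ksppcv y
     where x.indx = y.indx
     and x.ksppinm = '_system_trig_enabled';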