
Wednesday, August 16, 2017

How To Collect Log Files from Cloning in 12.2





Oracle Applications Manager - Version 12.2 to 12.2.5 [Release 12.2Cloud to 12.2]
SOLUTION

The cloning process in R12.2 creates log files in several locations:





1. OraInventory logs:

[oraInventory location]/logs (the oraInventory location is defined in the oraInst.loc file)





2. PRECLONE logs from the source server:

adpreclone logs database tier:

$ORACLE_HOME/appsutil/log/[CONTEXT_NAME]/StageDBTier_[timestamp].log

adpreclone logs Applications tier:

$INST_TOP/admin/log/clone/StageAppsTier_[timestamp].log






3. ADCFGCLONE logs from target server:

adcfgclone logs database tier:

$ORACLE_HOME/appsutil/clone/bin/CloneContext_[timestamp].log

This log records the entries selected during execution of the adcfgclone command.

$ORACLE_HOME/appsutil/log/[CONTEXT_NAME]/ApplyDBTier_[timestamp].log

This log shows the results of the clone execution.

adcfgclone logs Applications tier:

$COMMON_TOP/clone/bin/CloneContext_[timestamp].log

This log records the entries selected during execution of the adcfgclone command.

$COMMON_TOP/clone/FMW/logs/prereqcheck.log

This log shows the prerequisite checks for the FMW installation.

$INST_TOP/admin/log/clone_[timestamp]/ApplyAppsTier_[timestamp].log

This log shows the results for the execution of the clone.

There are some other logs under $INST_TOP/admin/log/clone, so it is OK to ask the customer to upload a zip file of this directory for review.

If providing logs to support, also compress and upload a zip of the directory $APPLRGF/TXK/ohsCloneLog (this captures cloning failures for OHS).







4. To obtain all the logs via the zip command:

The 12.2 clone logs can also be collected using the following zip method:

On the target database tier, issue the following zip command:

$ zip -r /tmp/${TWO_TASK}_`uname -n`_`date +%m%d%y.%H%M`_DB_Clone_logs.zip \
$ORACLE_HOME/appsutil/clone/bin/CloneContext_*.log \
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/NetServiceHandler.log \
$ORACLE_HOME/appsutil/log/$CONTEXT_NAME/ApplyDBTier_*.log

On the target application tier, issue the following zip command:

$ zip -r /tmp/${TWO_TASK}_`uname -n`_`date +%m%d%y.%H%M`_APP_Clone_logs.zip \
$INST_TOP/admin/log/* \
$INST_TOP/admin/oraInventory/logs/* \
$APPLRGF/TXK/ohsCloneLog/* \
$COMMON_TOP/clone/bin/CloneContext_*.log

Upload the two Clone_logs.zip files generated by the commands above; they will be in the /tmp directory.

Saturday, August 12, 2017

Oracle E-Business Suite Release 12.2: Validations Performed By the adop Online Patching Utility

 


     


The most current version of this document can be obtained in My Oracle Support Knowledge Document 1678355.1.
A change log is available at the end of this document.
This document is divided into the following main sections:
Section 1: Overview
Section 2: List of Validations
Section 1: Overview
This document lists validations that have been added to identify various issues in different phases of the online patching cycle. These validations are designed to verify setup, environment or other issues that might otherwise result in failure of fs_clone. In addition, their existence facilitates debugging of fs_clone errors.
The full list of issues is:
Environment setup issues
Improper code level
Port issues
Non-compliance to standards
Documentation not followed
The validations have been added into the Delta 4 consolidated patches to avoid such situations. The Delta 4 consolidated patches are for application on top of R12.AD.C.Delta.4 and R12.TXK.C.Delta.4. The validations will be also incorporated into all subsequent AD and TXK release update packs.
The Delta 4 consolidated patch numbers are:
CONSOLIDATED FIXES FOR R12.AD.C.DELTA.4 (18491990:R12.AD.C)
CONSOLIDATED FIXES FOR R12.TXK.C.DELTA.4 (18497540:R12.TXK.C)
For more information on applying these patches, refer to:
My Oracle Support Knowledge Document, Oracle E-Business Suite Release 12.2: Now Available - Essential Consolidated Rollup Patches for AD Delta 4 and TXK Delta 4
For information about applying the R12.AD.C.Delta.4 and R12.TXK.C.Delta.4 release update packs themselves, refer to:
My Oracle Support Knowledge Document, Applying the R12.AD.C.Delta.4 and R12.TXK.C.Delta.4 Release Update Packs
Section 2: List of Validations
This section describes the validations added.
Key points:
Each validation is represented in the form of a table.
These validations are performed in different phases of adop. In a multi-node system, some validations may be executed only on the primary node, while others may be performed on all nodes.
On detection of a failure or other problem, the validations will generate either warnings or errors. There are two levels of error messages: high-level and detailed.
On a warning, the adop operation can proceed, but the cause should be investigated in case it leads to other issues later. In the case of an error, further processing is blocked until the issue is fixed.
Each validation warning or error message also provides suggested corrective action.
Note: The table sequence represents the order in which the validations are performed.
1. Check all node entries in fnd_nodes table
Validation: Check all node entries in fnd_nodes table
Execution happens on: Primary node
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: Information missing in FND_NODES table for one or more application tier nodes. For details, refer to the log file on the relevant node.
Detailed error message: ERROR: Missing entries in FND_NODES table.
Nodes present in FND_NODES:
Nodes missing from FND_NODES:
Corrective Action: Run AutoConfig on the run file system of the nodes that are missing from the FND_NODES table.
Method Name: validateFndNodes
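For reference, a typical AutoConfig invocation on the run file system of an affected node looks like this (a sketch assuming a standard 12.2 layout; substitute your base directory and APPS password):
$ source [RUN BASE]/EBSapps.env run
$ sh $ADMIN_SCRIPTS_HOME/adautocfg.sh appspass=[APPS Password]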
2. Check all nodes have context files in FND_OAM_CONTEXT_FILES table for both run and patch file systems
Validation: Check all nodes have context files in FND_OAM_CONTEXT_FILES table for both run and patch file systems
Execution happens on: Primary node
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: There is a missing entry in the FND_OAM_CONTEXT_FILES table for at least one application tier node context file.
Detailed error message: ERROR: Nodes with context files in the FND_OAM_CONTEXT_FILES table on both run and patch file systems:
Nodes without context files in the FND_OAM_CONTEXT_FILES table on the run or patch file system:
Corrective Action:
- If the run file system context file for a node is missing, run AutoConfig on the run file system of that node to synchronize the value with the database.
- If the patch file system context file of a node is missing, run AutoConfig on the patch file system of that node with the -syncctx option, as follows, to synchronize the value with the database.
UNIX:
$ sh [AD_TOP]/bin/adconfig.sh contextfile=[CONTEXT_FILE] -syncctx
Windows:
C:\> [AD_TOP]\bin\adconfig.cmd contextfile=[CONTEXT_FILE] -syncctx
Method Name: validateFndOamContextFiles
3. Checks related to S_JDKTOP and S_FMW_JDKTOP
Validation: Checks related to S_JDKTOP and S_FMW_JDKTOP
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: Value set for JDK_TOP (context variable s_jdktop) is invalid.
Detailed error message: 1. Sub-Validation: Check if s_jdktop and s_fmw_jdktop are pointing to the correct location.
Error message:
ERROR: The value of the s_jdktop context variable is incorrect.
Corrective Action: Contact Oracle Support to identify the best course of action.

2. Sub-Validation: Check if the s_jdktop value is the same on the run file system and in the database.
Error message:
ERROR: The value of the context variable s_jdktop for the run file system is not consistent between the file system and the database.
Corrective Action: Correct the value in the context file, and run AutoConfig to sync with the value in the database.

3. Sub-Validation: Check if the s_fmw_jdktop value is the same on the run file system and in the database.
Error message:
ERROR: The value of the context variable s_fmw_jdktop for the run file system is not consistent between the file system and the database.
Corrective Action: Correct the value in the context file, and run AutoConfig to sync with the value in the database.

4. Sub-Validation: Check that s_jdktop has the same value on the current node and primary node.
Error message:
ERROR: Value of s_jdktop is not consistent with the value on the primary node.
Corrective Action: Update the value of s_jdktop to match the value on the primary node.

5. Sub-Validation: Check if s_fmw_jdktop has the same value on the current node and primary node.
Error message:
ERROR: Value of s_fmw_jdktop is not in sync with the value on the primary node.
Corrective Action: Update the value of s_fmw_jdktop to match the value on the primary node.

6. Sub-Validation: If the JDK version is greater than 1.6, the TXK code level should be C.Delta.3 or higher.
Error message:
ERROR:
JDK version:
TXK code level:
A JDK version higher than 1.6 requires TXK code level C.Delta.3 or higher. This instance is at a lower TXK code level.
Corrective Action: Contact Oracle Support to identify the best course of action.
Method Name: validateJDKTop
4. Check consistency and validity of Oracle Inventory setup
Validation: Check consistency and validity of Oracle Inventory setup
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: At least one Oracle Inventory check has failed.
Detailed error message: 1. Sub-Validation: Oracle Global Inventory does not exist.
Error message:
ERROR: Oracle Global Inventory does not exist.
Corrective Action: Provide the location of a valid inventory file.

2. Sub-Validation: Oracle Inventory Pointer location directory doesn't exist.
Error message:
Oracle Inventory Pointer location directory doesn't exist.
Corrective Action: Provide the location of a valid inventory file.

3. Sub-Validation: Oracle home directory is not present.
Error message:
[ORACLE_HOME] registered in the inventory does not exist on the file system.
Corrective Action: Provide the location of a valid inventory file.

4. Sub-Validation: Oracle home directory is not registered in the inventory.
Error message:
ERROR: [ORACLE_HOME] is not registered in the inventory
Corrective Action: Provide the location of a valid inventory file. If you believe the inventory is valid, you may want to attach the Oracle home to the inventory.

5. Sub-Validation: [ORACLE_HOME] does not exist on the file system but is still attached in inventory.xml.
Error message:
[ORACLE_HOME] does not exist on the file system but is still attached in inventory.xml.

6. Sub-Validation: [ORACLE_HOME] either doesn't exist in inventory.xml or has been detached from inventory.xml.
Error message:
[ORACLE_HOME] either doesn't exist in inventory.xml or has been detached from inventory.xml.

7. Sub-Validation: The user does not have write permission for the inventory.xml file.
Error message:
ERROR: The user does not have write permission for inventory.xml.
Corrective Action: Provide correct permissions for inventory.xml.

8. Sub-Validation: The inventory.xml file either doesn't exist or doesn't have read permission.
Error message: Either the file doesn't exist or doesn't have READ permission.

9. Sub-Validation: Dependency between the webtier and oracle_common Oracle Homes in the inventory.
Error message:
Corrective Action: Resolve the dependency between the webtier and oracle_common Oracle Homes in the inventory.
Method Name: validateInventory
5. Check if APPL_TOP name is same in database and run file system
Validation: Check if APPL_TOP name is the same in the database and run file system
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: APPL_TOP name is not set up properly for the current node.
Detailed error message: ERROR: The value of the context variable s_atName is not consistent between the run context file in the file system and the database.
Corrective Action: Correct the value of s_atName in the run context file, then run AutoConfig to sync with the value in the database.
Method Name: validateAtName
6. Check if instantiate instructions in the readme have been followed correctly
Validation: Check if instantiate instructions have been followed correctly
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: OHS configuration files have not been instantiated from the latest AutoConfig templates.
Detailed error message: ERROR: File version mismatch.
Template File:
Target File:

ERROR: Oracle HTTP Server configuration file version does not match the corresponding AutoConfig template file.
Corrective Action: Refer to My Oracle Support Knowledge Document 1617461.1 to apply the latest AD and TXK release update packs.

Method Name: validateOHSTmplInst
7. Check if custom product tops have been added correctly
Validation: Check if custom product tops have been added correctly
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: Custom product tops have not been added correctly.
Detailed error message: ERROR: The following custom products have been added inappropriately:
Corrective Action: Refer to My Oracle Support Knowledge Document 1577707.1 to add custom products. If you face any issues, contact Oracle Support to identify the best course of action.
Method Name: validateCustProdTop
8. Check if AD and TXK code levels match
Validation: Check if AD and TXK code levels match
Execution happens on: Primary node
adop phase: fs_clone, cutover
Warning or error?: Error
Basic error message: TXK RUP has been applied without applying the corresponding AD RUP.
Detailed error message: ERROR: AD and TXK code levels are not in sync.
TXK code level:
AD code level:
Corrective Action: Apply the required RUP.
Method Name: validateADTXKOrder
9. Check if adminserver status and web admin status are in sync
Validation: Check if adminserver status and web admin status are in sync
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: Inconsistency detected in context file Admin Server information between nodes or file systems.
Detailed error message: 1. Sub-Validation: check the values of s_adminserverstatus between the RUN and PATCH file systems.
Error message:
Value of the s_adminserverstatus context variable is not consistent between run and patch file systems. The Admin Server can be enabled only on the primary node.
Corrective Action: Amend the value of the s_adminserverstatus context variable.

2. Sub-Validation: check the values of s_adminserverstatus between the RUN file system and the database.
Error message:
ERROR: The context file value of s_adminserverstatus on the run file system is not consistent between the file system and the database context file.
Corrective Action: Correct the value of the s_adminserverstatus context variable in the run file system context file, then run AutoConfig.

3. Sub-Validation: check the value of s_web_admin_status between the RUN file system and the database.
Error message:
ERROR: The context file value of s_web_admin_status on the run file system is not consistent between the file system and the database context file.
Corrective Action: Correct the value of the s_web_admin_status context variable in the run file system context file, then run AutoConfig.

4. Sub-Validation: check the value of s_web_admin_status between the RUN and PATCH file systems.
Error message:
Value of the s_web_admin_status context variable is not consistent between run and patch file systems.
Corrective Action: Amend the value of the s_web_admin_status context variable.
Method Names: validateAdminSrvStatus, validateAdminSrvStatusAcrossFS
10. Check if multi-node related setup properties are correct
Validation: Check if multi-node related setup properties are correct
Execution happens on: All nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: Either the value of the context variable s_shared_file_system in the run file system is not consistent between the file system and the database, or the APPL_TOP name across nodes is not set correctly.
Detailed error message: 1. Sub-Validation: Check if s_shared_file_system is in sync across the RUN file system and the database.
Error message:
ERROR: The value of the context variable s_shared_file_system is not consistent between the run file system context file and the database.
Corrective Action: Correct the value of s_shared_file_system in the run context file, and run AutoConfig to sync with the value in the database.

2. Sub-Validation: Check if the context variables s_atName and s_shared_file_system are not in sync.
Error message:
ERROR: s_atName and s_shared_file_system context variables are not in SYNC.
Corrective Action: Contact Oracle Support to identify the best course of action.
Method Names: validateMultiNodeSFSSync, validateMultiNodeATName
11. Check if any ports being used by E-Business Suite are listed in /etc/services
Validation: Check if any ports being used by Oracle E-Business Suite are listed in /etc/services
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Warning
Basic error message: At least one of the required ports for the patch file system is reserved in /etc/services.
Detailed error message: WARNING: This E-Business Suite instance is going to use the following ports listed in /etc/services.
If you want to use any of these ports for services that are not part of the E-Business Suite instance, you need to update /etc/services to use different ports for those non-EBS services.
Method Name: validateETCServicesPorts
12. Check if any required ports on patch file system are already in use
Validation: Check if any required ports on the patch file system are already in use
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: Some of the ports specified for the patch file system are not available.
Corrective Action: Free the listed ports and retry the adop operation.
Detailed error message: ERROR: The following required ports are in use:

Corrective Action: Free the listed ports and retry the adop operation.
Method Name: validatePatchFSPortPool
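Before retrying the adop operation, you can identify which process is holding a reported port, for example (substitute the actual port number):
$ netstat -an | grep [port]
$ lsof -i :[port]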
13. Check if localhost entry exists in /etc/hosts
Validation: Check if localhost entry exists in /etc/hosts
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Warning
Basic error message: Either some of the required entries in the /etc/hosts file might be missing (e.g. localhost or hostname), or the file /etc/hosts could not be read.
Detailed error message: 1. Sub-Validation: more than one IP address with the value 127.0.0.1 found.
Error message:
Corrective Action: Add multiple local hosts for the same IP address on a single line in the /etc/hosts file, as shown in the following example:
127.0.0.1 localhost [hostname] ...

2. Sub-Validation: more than one IP address with the value of the host IP address found.
Error message:
Corrective Action: Add multiple hosts for the same IP address on a single line in the /etc/hosts file, as shown in the following example:
...
Method Name: validateETCHosts
14. Check if WebLogic Server domain is not in edit mode

Validation: Check if WebLogic Server domain is not in edit mode
Execution happens on: All application tier nodes
adop phase: fs_clone, prepare
Warning or error?: Error
Basic error message: Domain is in edit mode.
Detailed error message: Domain might be locked by some other WLS user process.
Corrective Action: Release the edit lock and then restart the script.
Method Name: validateDomainEditable
15. Check edit.lok file to see if domain is in edit mode
Validation: Check edit.lok file to see if domain is in edit mode
Execution happens on: Primary node
adop phase: fs_clone, cutover
Warning or error?: Error
Basic error message: Patch file system WebLogic domain edit lock is enabled.
Detailed error message: ERROR: Edit session is enabled in the patch file system.
Corrective Action: Release the edit lock and then restart the script.
Method Name: validateEditLockEnabled

Manually Apply WebLogic Patches



./bsu.sh -install -patch_download_dir=/u01/DBLTEST/fs1/FMW_Home/utils/bsu/cache_dir -patchlist=9KCT -prod_dir=/u01/DBLTEST/fs1/FMW_Home/wlserver_10.3
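To verify the result, Smart Update can list the patches applied to the same product directory; a sketch using the paths above:
./bsu.sh -view -status=applied -prod_dir=/u01/DBLTEST/fs1/FMW_Home/wlserver_10.3
Note that a 12.2 instance has two application tier file systems (fs1 and fs2): if you patch a WebLogic home manually rather than through the standard patching flow, the same patch will typically need to be applied to the other file system's FMW_Home as well.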

Tuesday, February 28, 2017

Cloning Oracle E-Business Suite Release 12 RAC-Enabled Systems with Rapid Clone



This document describes the process of using the Oracle Applications Rapid Clone utility to create a clone (copy) of an Oracle E-Business Suite Release 12 system that utilizes the Oracle Database 10g, 11g, or 12c Real Application Clusters feature.
The resulting duplicate Oracle Applications Release 12 RAC environment can then be used for purposes such as:
  • Patch testing
  • User Acceptance testing
  • Performance testing
  • Load testing
  • QA validation
  • Disaster recovery
The most current version of this document can be obtained in OracleMetaLink Note 559518.1.
There is a change log at the end of this document.

In This Document

Note: At present, the procedures described in this document apply to UNIX and Linux platforms only, and are not suitable for Oracle Applications Release 12 RAC-enabled systems running on Windows.
A number of conventions are used in describing the Oracle Applications architecture:
Convention: Meaning
Application tier: Machines (nodes) running Forms, Web, and other services (servers). Also called middle tier.
Database tier: Machines (nodes) running the Oracle Applications database.
oracle: User account that owns the database file system (database ORACLE_HOME and files).
CONTEXT_NAME: The CONTEXT_NAME variable specifies the name of the Applications context that is used by AutoConfig. The default is [SID]_[hostname].
CONTEXT_FILE: Full path to the Applications context file on the application tier or database tier.
APPSpwd: Oracle Applications database user password.
Source System: Original Applications and database system that is to be duplicated.
Target System: New Applications and database system that is being created as a copy of the source system.
ORACLE_HOME: The top-level directory into which the database software has been installed.
CRS_ORACLE_HOME: The top-level directory into which the Cluster Ready Services (CRS) software has been installed.
ASM_ORACLE_HOME: The top-level directory into which the Automatic Storage Management (ASM) software has been installed.
RMAN: Oracle's Recovery Manager utility, which ships with the 10g, 11g and 12c Database.
Image: The RMAN proprietary-format files from the source system backup.
Monospace Text: Represents command line text. Type such a command exactly as shown.
[ ]: Text enclosed in square brackets represents a variable. Substitute a value for the variable text. Do not type the square brackets.
\: On UNIX, the backslash character is entered at the end of a command line to indicate continuation of the command on the next line.

Section 1: Overview, Prerequisites and Restrictions

1.1 Overview

Converting Oracle E-Business Suite Release 12 from a single instance database to a multi-node Oracle Real Application Clusters (Oracle RAC) enabled database (described in OracleMetalink Note 388577.1) is a complex and time-consuming process. It is therefore common for many sites to maintain only a single system in which Oracle RAC is enabled with the E-Business Suite environment. Typically, this will be the main production system. In many large enterprises, however, there is often a need to maintain two or more Oracle RAC-enabled environments that are exact copies (or clones) of each other. This may be needed, for example, when undertaking specialized development, testing patches, working with Oracle Global Support Services, and other scenarios. It is not advisable to carry out such tasks on a live production system, even if it is the only environment enabled to use Oracle Real Application Clusters.
The goal of this document (and the patches mentioned herein) is to provide a rapid, clear-cut, and easily achievable method of cloning an Oracle RAC enabled E-Business Suite Release 12 environment to a new set of machines on which a duplicate RAC enabled E-Business Suite system is to be deployed.
This process will be referred to as RAC-To-RAC cloning from here on.

1.1.2 Cluster Terminology

You should understand the terminology used in a cluster environment. Key terms include the following.
  • Automatic Storage Management (ASM) is an Oracle database component that acts as an integrated file system and volume manager, providing the performance of raw devices with the ease of management of a file system. In an ASM environment, you specify a disk group rather than the traditional datafile when creating or modifying a database structure such as a tablespace. ASM then creates and manages the underlying files automatically.
  • Oracle Cluster File System (OCFS2) is a general purpose cluster file system which can, for example, be used to store Oracle database files on a shared disk.
  • Certified Network File Systems refers to Oracle-certified network attached storage (NAS) filers; such products are available from EMC, HP, NetApp, and other vendors. See the Oracle Database 10g, 11g or 12c Real Application Clusters installation and user guides for details on supported NAS devices and certified cluster file systems.
  • Cluster Ready Services (CRS) is the primary program that manages high availability operations in an Oracle RAC environment. The crs process manages designated cluster resources, such as databases, instances, services, and listeners.
  • Oracle Real Application Clusters (Oracle RAC) is a database feature that allows multiple machines to work on the same data in parallel, reducing processing time. Of equal or greater significance, depending on the specific need, an Oracle RAC environment also offers resilience if one or more machines become temporarily unavailable as a result of planned or unplanned downtime.

1.3 Prerequisites

  • This document is only for use in RAC-To-RAC cloning of a source Oracle E-Business Suite Release 12 RAC System to a target Oracle E-Business Suite RAC System.
  • The steps described in this note are for use by accomplished Applications and Database Administrators, who should be:
    • Familiar with the principles of cloning an Oracle E-Business Suite system, as described in OracleMetaLink Note 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone.
    • Familiar with Oracle Database Server 10g, 11g or 12c, and have at least a basic knowledge of Oracle Real Application Clusters (Oracle RAC).
    • Experienced in the use of RapidClone, AutoConfig, and AD utilities, as well as the steps required to convert from a single instance Oracle E-Business Suite installation to a RAC-enabled one.
  • The source system must remain in a running and active state during database Image creation.
  • The addition of database RAC nodes (beyond the assumed secondary node) is, from the RapidClone perspective, easily handled. However, the Clusterware software stack and cluster-specific configuration must be in place first, to allow RapidClone to configure the database technology stack properly. The CRS-specific steps required for the addition of database nodes are briefly covered in Appendix A; refer to the Oracle Clusterware product documentation for greater detail and understanding.
  • Details such as operating system configuration of mount points, installation and configuration of ASM, OCFS2, NFS or other forms of cluster file systems are not covered in this document.
  • Oracle Clusterware installation and component service registration are not covered in this document.
  • Following are useful references when planning to set up Real Application Clusters and shared devices:

1.4 Restrictions

Before using RapidClone to create a clone of an Oracle E-Business Suite Release 12 RAC-enabled system, you should be aware of the following restrictions and limitations:
  • This RAC-To-RAC cloning procedure can be used on Oracle Database 10g, 11g and 12c RAC Systems.
  • The final cloned RAC environment will:
    • Use the Oracle Managed Files option for datafile names.
    • Contain the same number of redo log threads as the source system.
    • Have all datafiles located under a single "DATA_TOP" location.
    • Contain only a single control file, without any of the extra copies that the DBA typically expects.
  • During the cloning process, no allowance is made for the use of a Flash Recovery Area (FRA). If an FRA needs to be configured on the target system, it must be done manually.
  • At the conclusion of the cloning process, the final cloned Oracle RAC environment will use a pfile (parameter file) instead of an spfile. For proper CRS functionality, you should create an spfile and locate it in a shared storage location that is accessible from both Oracle RAC nodes.
  • Besides ASM and OCFS2, only NetApp branded devices (certified NFS clustered file systems) have been confirmed to work at present. While other certified clustered file systems should work for RAC-To-RAC cloning, shared storage combinations not specifically mentioned in this article are not guaranteed to work, and will therefore only be supported on a best-efforts basis.

Section 2: Configuration Requirements for the Source Oracle RAC System

2.1 Required Patches

Please refer to My Oracle Support Knowledge Document 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone, to obtain the latest required Rapid Clone Consolidated Update patch number. Download and apply the latest required Rapid Clone Consolidated Update patch at this time.
Warning: After applying any new Rapid Clone, AD or AutoConfig patch, the ORACLE_HOME(s) on the source system must be updated with the files included in those patches. To synchronize the Rapid Clone and AutoConfig files within the RDBMS ORACLE_HOME using the admkappsutil.pl utility, refer to OracleMetaLink Note 387859.1, Using AutoConfig to Manage System Configurations in Oracle E-Business Suite Release 12, and follow the instructions in section System Configuration and Maintenance, subsection Patching AutoConfig.

2.2 Supported Oracle RAC Migration

The source Oracle E-Business Suite RAC environment must be created in accordance with My Oracle Support Knowledge Document 388577.1, Using Oracle 10g Release 2 Real Application Clusters and Automatic Storage Management with Oracle E-Business Suite Release 12. The RAC-To-RAC cloning process described here has only been validated for use on Oracle E-Business Suite Release 12 systems that have been converted to use Oracle RAC as per this note.

2.3 AutoConfig Compliance on Oracle RAC Nodes

Also in accordance with My Oracle Support Knowledge Document 388577.1, AutoConfig must have been used during Oracle RAC configuration of the source system (following conversion).

2.4 Supported Datafile Storage Methods

The storage method used for the source system datafiles must be one of the following Oracle 10g/11g/12c RAC Certified types:
  • NFS Clustered File Systems (such as NetApp Filers)
  • ASM (Oracle Automatic Storage Management)
  • OCFS2 (Oracle Cluster File System V2)

2.5 Archive Log Mode

The source system database instances must be in archive log mode, and the archive log files must be located within the shared storage area where the datafiles are currently stored. This conforms to standard Oracle RAC best practices.
Warning: If the source system was not previously in archive log mode but it has recently been enabled, or if the source system parameter LOG_ARCHIVE_DEST was at some point set to any local disk directory location, you must ensure that RMAN has a properly maintained list of valid archive logs located exclusively in the shared storage area.
To confirm RMAN knows only of your archive logs located on the shared disk storage area, do the following.
First, use SQL*Plus or RMAN to show the locations of the archive logs. For example:
SQL> archive log list
If the output shows a local disk location, change this location appropriately, and back up or relocate any archive log files to the shared storage area. It will then be necessary to correct the RMAN archive log manifest, as follows:
RMAN> crosscheck archivelog all;
Review the output archive log file locations and, assuming you have relocated or removed any locally stored archive logs, correct the invalid or expired archive logs as follows:
RMAN> delete expired archivelog all;
It is essential to carry out the above steps (if applicable) before you continue with the Oracle E-Business Suite Release 12 RAC cloning procedure.
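If the archive destination itself needs to be changed, the general form of the command is shown below (a sketch with a hypothetical shared storage path; adjust the destination number to match your configuration):
SQL> alter system set log_archive_dest_1='LOCATION=/shared/arch' scope=both sid='*';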

2.6 Control File Location

The database instance control files must be located in the shared storage area as well.

Section 3: Configuration Requirements for the Target RAC System

3.1 User Equivalence between Oracle RAC Nodes

Set up ssh and rsh user equivalence (that is, without password prompting) between primary and secondary target Oracle RAC nodes. This is described in the following documents:
Note: SSH connectivity can also be set up automatically during installation. This is described in:

3.2 Install Cluster Manager

Install Oracle Cluster Manager, and update the version to match that of the source system database. For example, if the original source system database is 10.2.0.3, Cluster Manager must also be patched to the 10.2.0.3 level.

3.3 Verify Shared Mount Points or Disks

Ensure that all shared disk sub-systems are fully and properly configured: they need to have adequate space, be writable by the future oracle software owner, and be accessible from both primary and secondary nodes.
Note: For details on configuring ASM, OCFS2, and NFS with NetApp Filer, see the following articles:
Note: For ASM target deployments, it is strongly recommended that a separate $ORACLE_HOME be installed for ASM management, whatever the location of your ASM listener configuration; it is also required to change the default listener configuration via the netca executable. The ASM default listener name (or service name) must not be of the form LISTENER_[HOSTNAME]. This listener name (LISTENER_[HOSTNAME]) will be specified and used later by AutoConfig for the RAC-enabled Oracle E-Business Suite database listener.

3.4 Verify Network Layer Interconnects

Ensure that the network layer is properly defined for private, public and VIP (Clusterware) Interconnects. This should not be a problem if runcluvfy.sh from the Oracle Clusterware software stage area was executed without error prior to CRS installation.

Section 4: Preparing the Source Oracle RAC System for Cloning

4.1 Update the File System with the latest Oracle RAC Patches

The latest Rapid Clone Consolidated Update patch (with the post-patch steps in its README) and all prerequisite patches should already have been applied, as described in Section 2 of this note. After patch application, adpreclone.pl must be re-executed on all the application and database tiers. For example, on the database tier, the following command would be used:
$ cd $ORACLE_HOME/appsutil/scripts/[context_name]
$ perl adpreclone.pl dbTier
After executing adpreclone.pl on all the application and database tiers, perform the steps below.

4.2 Create Database Image

Note: Do NOT shut down the source system database services to complete the steps in this section. The database must remain mounted and open for the imaging process to complete successfully. Rapid Clone for RAC-enabled Oracle E-Business Suite Release 12 systems operates differently from single instance cloning.
Log in to the primary Oracle RAC node, navigate to [ORACLE_HOME]/appsutil/clone/bin, and run the adclone.pl utility from a shell as follows:
perl adclone.pl \
java=[JDK 1.5 Location] \
mode=stage \
stage=[Stage Directory] \
component=database \
method=RMAN \
dbctx=[RAC DB Context File] \
showProgress
Where:
Parameter: Usage
stage: Any directory or mount point location outside the current ORACLE_HOME location, with enough space to hold the existing database datafiles in an uncompressed form.
dbctx: Full path to the existing Oracle RAC database context file.
The above command will create a series of directories under the specified stage location.
After the stage creation is completed, navigate to [stage]/data/stage. In this directory, you will find several 2GB RMAN backup/image files, with names like "1jj9c44g_1_1". The number of files present will depend on the source system configuration. These files, along with "backup_controlfile.ctl", will need to be transferred to the target system upon which you wish to create your new primary Oracle RAC node.

These files should be placed into a temporary holding area, which can be removed once the clone is complete.
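As an illustration, a stage invocation with hypothetical values (the JDK location, stage directory, and source context file name below are examples only) might look like:
perl adclone.pl \
java=/u01/jdk1.5 \
mode=stage \
stage=/u01/rman_stage \
component=database \
method=RMAN \
dbctx=/u01/racdb/appsutil/PROD1_ducati.xml \
showProgress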

4.3 Archive the ORACLE_HOME

Note: The database may be left up and running during the ORACLE_HOME archive creation process.
Create an archive of the source system ORACLE_HOME on the primary node:
$ cd $ORACLE_HOME/..
$ tar -cvzf rac_db_oh.tgz [DATABASE TOP LEVEL DIRECTORY]
Note: Consider using data integrity utilities such as md5sum, sha1sum, or cksum to validate the file sum both before and after transfer to the target system.
This source system ORACLE_HOME archive should now be transferred to the target system RAC nodes upon which you will be configuring the new system, and placed in the directory you wish to use as the new $ORACLE_HOME.
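For example, using md5sum on both the source and target hosts and comparing the output manually:
$ md5sum rac_db_oh.tgz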

Section 5: RAC-to-RAC Cloning

5.1 Target System Primary Node Configuration (Clone Initial Node)

Follow the steps below to clone the primary node (i.e. Node 1) to the new target system.

5.1.1 Uncompress ORACLE_HOME

Uncompress the ORACLE_HOME archive that was transferred from the source system. Choose a suitable location, and rename the extracted top-level directory name to something meaningful on the new target system.
$ tar -xvzf rac_db_oh.tgz

5.1.2 Create pairsfile.txt File for Primary Node

Create a [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt text file with contents as shown below:
s_undo_tablespace=[UNDOTBS1 for Initial Node]
s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
s_db_oh=[Location of new ORACLE_HOME]
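For example, a filled-in pairsfile.txt for a two-instance cluster might read as follows (the ORACLE_HOME path is hypothetical):
s_undo_tablespace=UNDOTBS1
s_dbClusterInst=2
s_db_oh=/u01/racdb/11.2.0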

5.1.3 Create Context File for Primary Node

Execute the following command to create a new context file, providing carefully determined answers to the prompts.
Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility with the following parameters:
perl adclonectx.pl \
contextfile=[PATH to OLD Source RAC contextfile.xml] \
template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
initialnode
Where:
Parameter: Usage
contextfile: Full path to the old source RAC database context file.
template: Full path to the existing database context file template.
pairsfile: Full path to the pairsfile created in the last step.
Note: A new and unique global database name (DB name) must be selected when creating the new target system context file. Do not use the source system global database name or SID name during any of the context file interview prompts, as shown below.
You will be presented with the following questions [sample answers provided]:
Target System Hostname (virtual or normal) [kawasaki] [Enter appropriate value if not defaulted]

Do you want the inputs to be validated (y/n) [n] ? : [Enter n]

Target Instance is RAC (y/n) [y] : [Enter y]

Target System Database Name : [Enter new desired global DB name, not a SID; motoGP global name was selected here]

Do you want the target system to have the same port values as the source system (y/n) [y] ? : [Select yes or no]

Provide information for the initial RAC node:

Host name [ducati] : [Always need to change this value to the current public machine name, for example kawasaki]

Virtual Host name [null] : [Enter the Clusterware VIP interconnect name, for example kawasaki-vip ]

Instance number [1] : 1 [Enter 1, as this will always be the instance number when you are on the primary target node]

Private interconnect name [kawasaki] [Always need to change this value; enter the private interconnect name, such as kawasaki-priv]

Target System quorum disk location required for cluster manager and node monitor : /tmp [Legacy parameter; just enter /tmp]

Target System cluster manager service port : 9998 [This is a default port used for CRS ]

Target System Base Directory : [Enter the base directory that contains the new_oh_loc dir]

Oracle OS User [oracle] : [Should default to correct current user; just hit enter]

Oracle OS Group [dba] : [Should default to correct current group; just hit enter]

Target System utl_file_dir Directory List : /usr/tmp [Specify an appropriate value for your requirements]

Number of DATA_TOP's on the Target System [2] : 1 [At present, you can only have one data_top with RAC-To-RAC cloning]

Target System DATA_TOP Directory 1 : +APPS_RAC_DISK [The shared storage location; ASM diskgroup/NetApps NFS mount point/OCFS2 mount point]

Do you want to preserve the Display [null] (y/n) ? : [Respond according to your requirements]

New context path and file name [/s1/atgrac/racdb/appsutil/motoGP1_kawasaki.xml] : [Double-check proposed location, and amend if needed]

Note: It is critical that the correct values are selected above: if you are uncertain, review the newly-written context file and compare it with values selected during source system migration to RAC (as per OracleMetalink Note 388577.1).
When making comparisons, always ensure that any path differences between the source and target systems are understood and accounted for.

 

Note: If the most current AutoConfig Template patch as listed in OracleMetalink Note 387859.1 has already been applied to the source system, it is necessary to edit the value of the target system context variable "s_db_listener" to reflect the desired name for the TNS listener. The traditionally accepted value is "LISTENER_[HOSTNAME]", but it may be any valid value unique to the target host.

 

5.1.4 Restore Database on Target System Primary Node

Warning: It is NOT recommended to clone an E-Business Suite RAC-enabled environment to the same host. However, if the source and target systems must be on the same host, make certain the source system is cleanly shut down and its datafiles are moved to a temporarily inaccessible location prior to restoring/recovering the new target system.
Failure to heed this warning could result in corrupt redo logs on the source system. Same-host RAC cloning requires the source system to be down.
Warning: In addition to the same-host RAC node cloning case, it is also NOT recommended to attempt cloning E-Business Suite RAC-enabled environments to a target system which can directly access source system dbf files (perhaps via an NFS shared mount). If the intended target file system has access to the source dbf files, corruption of redo log files can occur on the source system. It is also possible that corruption might occur if ANY dbf files exist on the new intended target file system in a path which matches the original source mount point (for example, /foo/datafiles). If existing datafiles on the target are in a file system location that also exists on the source server (for example, /foo/datafiles), shut down the database which owns these datafiles.
Failure to heed this warning could result in corrupt redo logs on the source system, or in any existing database on the target host that has a mount point matching the original and perhaps unrelated source system. If unsure, shut down any database which stores datafiles in a path which existed on the source system and in which datafiles were stored.
Restore the database after the new ORACLE_HOME is configured.

5.1.4.1 Run adclone.pl to Restore and Rename Database on New Target System

Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run Rapid Clone (adclone.pl utility) with the following parameters:
perl adclone.pl \
java=[JDK 1.5 Location] \
component=dbTier \
mode=apply \
stage=[ORACLE_HOME]/appsutil/clone \
method=CUSTOM \
dbctxtg=[Full Path to the Target Context File] \
rmanstage=[Location of the source RMAN dump files, i.e. RMAN_STAGE/data/stage] \
rmantgtloc=[Shared storage location for datafiles: ASM diskgroup / NetApp NFS mount / OCFS2 mount point] \
srcdbname=[Source RAC system GLOBAL name] \
pwd=[APPS Password] \
showProgress
Where:
Parameter: Usage
java: Full path to the directory where JDK 1.5 is installed.
stage: This parameter is static and refers to the newly-unzipped [ORACLE_HOME]/appsutil/clone directory.
dbctxtg: Full path to the new context file created by adclonectx.pl under [ORACLE_HOME]/appsutil.
rmanstage: Temporary location where you have placed the database "image" files transferred from the source system to the new target host.
rmantgtloc: Base directory or ASM diskgroup location into which you wish the database (dbf) files to be extracted. The recreation process will create subdirectories of [GLOBAL_DB_NAME]/data, into which the dbf files will be placed. Only the shared storage mount point top-level location need be supplied.
srcdbname: Source system GLOBAL_DB_NAME (not the SID of a specific node). Refer to the source system context file parameter s_global_database_name. Note that no domain suffix should be added.
pwd: Password for the APPS user.
Note: The directories and mount points selected for the rmanstage and rmantgtloc locations should not contain datafiles for any other databases. The presence of unrelated datafiles may result in very lengthy restore operations, and on some systems a potential hang of the adclone.pl restore command.
Running the adclone.pl command may take several hours. From a terminal window, you can run:
$ tail -f [ORACLE_HOME]/appsutil/log/$CONTEXT_NAME/ApplyDatabase_[time].log
This will display and periodically refresh the last few lines of the main log file (mentioned when you run adclone.pl), where you will see references to additional log files that can help show the current actions being executed.

Note: If the database version is 12c Release 1, be sure to add the following line to your sqlnet_ifile.ora after adclone.pl execution completes:
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8 (if the initialization parameter SEC_CASE_SENSITIVE_LOGON is set to FALSE)
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10 (if SEC_CASE_SENSITIVE_LOGON is set to TRUE)

5.1.4.2 Verify TNS Listener has been started

After the above process exits, and it has been confirmed that no errors were encountered, you will have a running database and TNS listener, with the new SID name chosen earlier.
Confirm that the TNS listener is running, and has the appropriate service name format as follows:
$ ps -ef | grep tns | awk '{ print $9}'
The output from the above command should return a string of the form LISTENER_[hostname]. If it does not, verify the listener.ora file in the $TNS_ADMIN location before continuing with the next steps: the listener must be up and running before executing AutoConfig.
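You can also query the listener directly, substituting the actual hostname:
$ lsnrctl status LISTENER_[hostname]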

5.1.4.3 Run AutoConfig

At this point, the new database is fully functional. However, to complete the configuration you must navigate to [ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME] and execute the following command to run AutoConfig:
$ adautocfg.sh appspass=[APPS Password]

5.2 Target System Secondary Node Configuration (Clone Additional Nodes)

Follow the steps below to clone the secondary nodes (for example, Node 2) on to the target system.

5.2.1 Uncompress the archived ORACLE_HOME transferred from the Source System

Uncompress the source system ORACLE_HOME archive to a location matching that present on your target system primary node. The directory structure should match that present on the newly created target system primary node.
$ tar -xvzf rac_db_oh.tgz

5.2.2 Archive the [ORACLE_HOME]/appsutil directory structure from the new Primary Node

Log in to the new target system primary node, and execute the following commands:
$ cd [ORACLE_HOME]
$ zip -r appsutil_node1.zip appsutil

5.2.3 Copy appsutil_node1.zip to the Secondary Target Node

Transfer and then expand appsutil_node1.zip into the [NEW ORACLE_HOME] on the secondary target RAC node.
$ cd [NEW ORACLE_HOME]
$ unzip -o appsutil_node1.zip
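The transfer itself can be done with any file copy tool; for example, from the primary node (hypothetical host name shown):
$ scp [ORACLE_HOME]/appsutil_node1.zip oracle@suzuki:[NEW ORACLE_HOME]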

5.2.4 Update pairsfile.txt for the Secondary Target Node

Alter the existing pairsfile.txt (from the first target node) and change the s_undo_tablespace parameter. As this is the second node, the correct value would be UNDOTBS2. As an example, the [NEW_ORACLE_HOME]/appsutil/clone/pairsfile.txt would look like:
s_undo_tablespace=UNDOTBS2 [or UNDOTBS(n) for additional nodes]
s_dbClusterInst=[Total number of Instances in a cluster e.g. 2]
s_db_oh=[Location of new ORACLE_HOME]

5.2.5 Create a Context File for the Secondary Node

Navigate to [NEW_ORACLE_HOME]/appsutil/clone/bin and run the adclonectx.pl utility as follows:
perl adclonectx.pl \
contextfile=[Path to Existing Context File from the First Node] \
template=[NEW ORACLE_HOME]/appsutil/template/adxdbctx.tmp \
pairsfile=[NEW ORACLE_HOME]/appsutil/clone/pairsfile.txt \
addnode
Where:
Parameter: Usage
contextfile: Full path to the existing context file from the first (primary) node.
template: Full path to the existing database context file template.
pairsfile: Full path to the pairsfile updated in the last step.
Several of the interview prompts are the same as on Node 1. However, there are some new questions which are specific to the "addnode" option used when on the second node.
Note: When answering the questions below, review your responses carefully before entering them. The rest of the inputs (not shown) are the same as those encountered during the context file creation on the initial node (primary node).
Host name of the live RAC node : kawasaki [enter appropriate value if not defaulted]

Domain name of the live RAC node : yourdomain.com [enter appropriate value if not defaulted]

Database SID of the live RAC node : motoGP1 [enter the individual SID, NOT the Global DB name]

Listener port number of the live RAC node : 1548 [enter the port # of the Primary Target Node you just created]

Provide information for the new Node:

Host name : suzuki [enter appropriate value if not defaulted, like suzuki]

Virtual Host name : suzuki-vip [enter the Clusterware VIP interconnect name, like suzuki-vip.yourdomain.com]

Instance number : 2 [enter the instance # for this current node]

Private interconnect name : suzuki-priv [enter the private interconnect name, like suzuki-priv]

Current Node:

Host Name : suzuki

SID : motoGP2

Instance Name : motoGP2

Instance Number : 2

Instance Thread : 2

Undo Table Space: UNDOTBS2 [enter value earlier added to pairsfile.txt, if not defaulted]

Listener Port : 1548

Target System quorum disk location required for cluster manager and node monitor : [legacy parameter, enter /tmp]

Note: At the conclusion of these "interview" questions related to context file creation, look carefully at the generated context file and ensure that the values contained therein correspond to the values entered during context file creation on Node 1. The values should be almost identical, a small but important exception being that the local instance name will have the number 2 instead of 1.

 

Note: If the most current AutoConfig Template patch as listed in OracleMetalink Note 387859.1 has already been applied to the source system, it is necessary to edit the value of the target system context variable "s_db_listener" to reflect the desired name for the TNS listener. The traditionally accepted value is "LISTENER_[HOSTNAME]", but it may be any valid value unique to the target host.

5.2.6 Configure NEW ORACLE_HOME

Run the commands below to move to the correct directory and continue the cloning process:
$ cd [NEW ORACLE_HOME]/appsutil/clone/bin
$ perl adcfgclone.pl dbTechStack [Full path to the database context file created in previous step]
Note: At the conclusion of this command, you will receive a console message indicating that the process exited with status 1 and that the addlnctl.sh script failed to start a listener named [SID]. That is expected, as this is not the proper service name. Start the proper listener by executing the following command:

[NEW_ORACLE_HOME]/appsutil/scripts/[CONTEXT_NAME]/addlnctl.sh start LISTENER_[hostname].

This command will start the correct (RAC-specific) listener with the proper service name.


Note: If the database version is 12c Release 1, be sure to add the following line to your sqlnet_ifile.ora after adcfgclone.pl execution completes:
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 8 (if the initialization parameter SEC_CASE_SENSITIVE_LOGON is set to FALSE)
  • SQLNET.ALLOWED_LOGON_VERSION_SERVER = 10 (if SEC_CASE_SENSITIVE_LOGON is set to TRUE)

5.2.7 Source the new environment file in the ORACLE_HOME

Run the commands below to move to the correct directory and source the environment:
$ cd [NEW ORACLE_HOME]
$ . ./[CONTEXT_NAME].env

5.2.8 Modify [SID]_APPS_BASE.ora

Edit the [SID]_APPS_BASE.ora file and change the control file parameter to reflect the correct control file location on the shared storage. This will be the same value as in the [SID]_APPS_BASE.ora on the target system primary node which was just created.
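For example, using the sample ASM disk group and database name from this document, the amended line might look like the following (illustrative only; copy the exact value from the primary node's file):
control_files='+APPS_RAC_DISK/motoGP/data/cntrl01.dbf'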

5.2.9 Start Oracle RAC Database

Start the database using the following commands:
$ sqlplus /nolog
SQL> connect / as sysdba
SQL> startup

5.2.10 Execute AutoConfig

Run AutoConfig to generate the proper listener.ora and tnsnames.ora files:
$ cd $ORACLE_HOME/appsutil/scripts/$CONTEXT_NAME
$ ./adautocfg.sh appspass=[APPS Password]

5.3 Carry Out Target System (Primary Node) Final Oracle RAC Configuration Tasks

5.3.1 Recreate TNSNAMES and LISTENER.ORA

Log in again to the target primary node (Node 1) and run AutoConfig to perform the final Oracle RAC configuration and create new listener.ora and tnsnames.ora files (the FND_NODES table did not contain the second node hostname until AutoConfig was run on the secondary target RAC node).
$ cd $ORACLE_HOME/appsutil/scripts/[CONTEXT_NAME]
$ ./adautocfg.sh appspass=[APPS Password]
Note: This execution of AutoConfig on the primary target RAC Node 1 will add the second RAC node's connection information to the first node's tnsnames.ora, so that listener load balancing can occur. If you have more than two nodes in your new target system cluster, you must repeat Sections 5.2 and 5.3 for all subsequent nodes.

Section 6: RAC to Single Instance Cloning

It is now possible to clone from a RAC-enabled E-Business Suite (source) environment to a Single Instance E-Business Suite (target) environment following nearly the same process detailed above in Section 5.
To clone from a RAC source environment to a Single Instance target, the image creation process described in Section 4 remains unchanged. On the target host system, however, while working through Section 5, the context file creation (step 5.1.3 above) should be done as in the case of Single Instance cloning. All other primary target restore tasks from Section 5 remain the same in the case of a Single Instance restore. Disregard any references to secondary node configuration (starting at step 5.2), as they do not apply here.
For example:
Target Instance is RAC (y/n) [y] : [Enter n]
Because we are cloning the context file from a RAC-enabled source system, the interview question above pre-selects a default value of being a RAC instance. Be certain to select "n" for the above question. By creating a context file without RAC attributes present, Rapid Clone will configure and convert the RDBMS technology stack and its binaries on the target system such that a Single Instance restore can be performed.
The Rapid Clone command to restore the database on the target system (step 5.1.4) remains the same whether the target is to be RAC or Single Instance.
Note: In a RAC to Single Instance Cloning scenario, there are no data structure changes to the database in regards to UNDO tablespaces or REDO log groups or members. These data structures will remain as were present in the source system RAC database. In some use cases, it might be advisable for the DBA to reduce the complexity carried over from the source RAC environment.

Section 7: Applications Tier Cloning for RAC

The target system Applications Tier may be located in any one of these locations:
  • Primary target database node
  • Secondary target database node
  • An independent machine, running neither of the target system RAC nodes
  • Shared between two or more machines
Because of the complexities which might arise, it is suggested that the applications tier should initially be configured to connect to a single database instance. After proper configuration with one of the two target system RAC nodes has been achieved, context variable changes can be made such that JDBC and TNS Listener load balancing are enabled.

7.1 Clone the Applications Tier

In order to clone the applications tier, follow the standard steps for the applications node given in Sections 2 and 3 of OracleMetaLink Note 406982.1, Cloning Oracle Applications Release 12 with Rapid Clone. This includes the adpreclone steps, copying the bits to the target, the configuration portion, and the finishing tasks.

Note: On the applications tier, during the adcfgclone.pl execution, you will be asked for a database to which the applications tier services should connect. Enter the information specific to a single target system RAC node (such as the primary node). On successful completion of this step, the applications node services will be started, and you should be able to log in and use the new target Applications system.

7.2 Configure Application Tier JDBC and Listener Load Balancing

Reconfigure the applications node context variables such that database listener/instance load balancing can occur.
Note: The following details have been extracted from OracleMetalink Note 388577.1 for your convenience. Consult this note for further information.
Implement load balancing for the Applications database connections:
  1. Run the context editor (through Oracle Applications Manager) and set the values of "Tools OH TWO_TASK" (s_tools_two_task), "iAS OH TWO_TASK" (s_weboh_twotask), and "Apps JDBC Connect Alias" (s_apps_jdbc_connect_alias).
  2. To load-balance the Forms-based Applications database connections, set the value of "Tools OH TWO_TASK" to point to the [database_name]_balance alias generated in the tnsnames.ora file.
  3. To load-balance the self-service Applications database connections, set the values of "iAS OH TWO_TASK" and "Apps JDBC Connect Alias" to point to the [database_name]_balance alias generated in the tnsnames.ora file.
  4. Execute AutoConfig by running the command:
    cd $ADMIN_SCRIPTS_HOME; ./adautocfg.sh
  5. After successful completion of AutoConfig, restart the Applications tier processes via the scripts located in $ADMIN_SCRIPTS_HOME.
  6. Ensure that the value of the profile option "Application Database ID" is set to the DBC file name generated in $FND_SECURE.
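For reference, the [database_name]_balance alias generated by AutoConfig in tnsnames.ora has roughly the following shape (illustrative only; the actual entry is generated by AutoConfig and should not be hand-edited):
motoGP_balance=
  (DESCRIPTION=
    (ADDRESS_LIST=
      (LOAD_BALANCE=YES)
      (FAILOVER=YES)
      (ADDRESS=(PROTOCOL=tcp)(HOST=kawasaki.yourdomain.com)(PORT=1548))
      (ADDRESS=(PROTOCOL=tcp)(HOST=suzuki.yourdomain.com)(PORT=1548))
    )
    (CONNECT_DATA=
      (SERVICE_NAME=motoGP)
    )
  )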

Section 8: Advanced Cloning Scenarios

8.1 Cloning the Database Separately

In certain cases, customers may require the RAC database to be recreated separately, without using the full lock-step mechanism employed during a regular E-Business Suite RAC RapidClone scenario.
This section documents the steps needed to allow for manual creation of the target RAC database control files (or the reuse of existing control files) within the Rapid Clone process.
Unless otherwise noted, all commands are specific to the primary target database instance.
Follow ONLY steps 1 and 2 in Section 2: Cloning Tasks of OracleMetaLink Note 406982.1, then continue with the steps below to complete cloning the database separately.
  1. Log on to the primary target system host as the ORACLE UNIX user.
  2. Configure the [RDBMS ORACLE_HOME] as noted above in Section 5: RAC-to-RAC Cloning; execute ONLY steps 5.1.1, 5.1.2 and 5.1.3.
  3. Create the target database control files manually (if needed) or modify the existing control files as needed to define the datafile, redo and archive log locations, along with any other relevant and required settings.

    In this step, you copy and recreate the database using your preferred method, such as RMAN restore, Flash Copy, Snap View, or Mirror View.
  4. Start the new target RAC database in open mode.
  5. Run the library update script against the RAC database.
    $ cd [RDBMS ORACLE_HOME]/appsutil/install/[CONTEXT_NAME]
    $ sqlplus "/ as sysdba" @adupdlib.sql [libext]
    Where [libext] should be set to 'sl' for HP-UX, 'so' for any other UNIX platform, or 'dll' for Windows.
  6. Configure the primary target database
    The database must be running and open before performing this step.
    $ cd [RDBMS ORACLE_HOME]/appsutil/clone/bin
    $ perl adcfgclone.pl dbconfig [Database target context file]
    Where Database target context file is: [RDBMS ORACLE_HOME]/appsutil/[Target CONTEXT_NAME].xml.
    Note: The dbconfig option will configure the database with the required settings for the new target, but it will not recreate the control files.
  7. When the above tasks (1-6) are completed on the primary target database instance, see "5.2 Target System Secondary Node Configuration (Clone Additional Nodes)" to configure any secondary database instances.

8.2 Additional Advanced RAC Cloning Scenarios

Rapid Clone is only certified for RAC-to-RAC and RAC-to-Single Instance Cloning at this time. Addition or removal of RAC nodes during the cloning process is not currently supported. 

Appendix A: Configuring Oracle Clusterware on the Target System Database Nodes

Associating Target System Oracle RAC Database instances and listeners with Clusterware (CRS)


Add the target system database, instances, and listeners to CRS by running the following commands as the owner of the CRS installation:

$ srvctl add database -d [database_name] -o [oracle_home]
$ srvctl add instance -d [database_name] -i [instance_name] -n [host_name]
$ srvctl add service -d [database_name] -s [service_name]
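For example, with the sample names used earlier in this document (the ORACLE_HOME path is hypothetical), the registration and a follow-up status check might look like:
$ srvctl add database -d motoGP -o /u01/racdb/11.2.0
$ srvctl add instance -d motoGP -i motoGP1 -n kawasaki
$ srvctl add instance -d motoGP -i motoGP2 -n suzuki
$ srvctl status database -d motoGP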